2026-03-10 00:00:09.700078 | Job console starting
2026-03-10 00:00:09.721053 | Updating git repos
2026-03-10 00:00:09.820660 | Cloning repos into workspace
2026-03-10 00:00:10.052774 | Restoring repo states
2026-03-10 00:00:10.078032 | Merging changes
2026-03-10 00:00:10.078052 | Checking out repos
2026-03-10 00:00:10.488372 | Preparing playbooks
2026-03-10 00:00:11.558307 | Running Ansible setup
2026-03-10 00:00:18.458189 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-10 00:00:20.278704 |
2026-03-10 00:00:20.278902 | PLAY [Base pre]
2026-03-10 00:00:20.294171 |
2026-03-10 00:00:20.294290 | TASK [Setup log path fact]
2026-03-10 00:00:20.313808 | orchestrator | ok
2026-03-10 00:00:20.349846 |
2026-03-10 00:00:20.350026 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-10 00:00:20.383916 | orchestrator | ok
2026-03-10 00:00:20.397005 |
2026-03-10 00:00:20.397118 | TASK [emit-job-header : Print job information]
2026-03-10 00:00:20.437058 | # Job Information
2026-03-10 00:00:20.437227 | Ansible Version: 2.16.14
2026-03-10 00:00:20.437261 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-10 00:00:20.437295 | Pipeline: periodic-midnight
2026-03-10 00:00:20.437318 | Executor: 521e9411259a
2026-03-10 00:00:20.437338 | Triggered by: https://github.com/osism/testbed
2026-03-10 00:00:20.437360 | Event ID: 7ceccf5167b1458eb5435531ac7a90c1
2026-03-10 00:00:20.446458 |
2026-03-10 00:00:20.446568 | LOOP [emit-job-header : Print node information]
2026-03-10 00:00:20.773131 | orchestrator | ok:
2026-03-10 00:00:20.773913 | orchestrator | # Node Information
2026-03-10 00:00:20.773974 | orchestrator | Inventory Hostname: orchestrator
2026-03-10 00:00:20.774002 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-10 00:00:20.774025 | orchestrator | Username: zuul-testbed03
2026-03-10 00:00:20.774047 | orchestrator | Distro: Debian 12.13
2026-03-10 00:00:20.774071 | orchestrator | Provider: static-testbed
2026-03-10 00:00:20.774092 | orchestrator | Region:
2026-03-10 00:00:20.774113 | orchestrator | Label: testbed-orchestrator
2026-03-10 00:00:20.774132 | orchestrator | Product Name: OpenStack Nova
2026-03-10 00:00:20.774151 | orchestrator | Interface IP: 81.163.193.140
2026-03-10 00:00:20.798217 |
2026-03-10 00:00:20.798325 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-10 00:00:22.150136 | orchestrator -> localhost | changed
2026-03-10 00:00:22.157657 |
2026-03-10 00:00:22.157765 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-10 00:00:24.090791 | orchestrator -> localhost | changed
2026-03-10 00:00:24.118264 |
2026-03-10 00:00:24.118374 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-10 00:00:24.763324 | orchestrator -> localhost | ok
2026-03-10 00:00:24.770191 |
2026-03-10 00:00:24.770292 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-10 00:00:24.813228 | orchestrator | ok
2026-03-10 00:00:24.855314 | orchestrator | included: /var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-10 00:00:24.877570 |
2026-03-10 00:00:24.877679 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-10 00:00:28.219765 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-10 00:00:28.219981 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/work/a7702c3a2f3542948a8cdd7d9e63fdb8_id_rsa
2026-03-10 00:00:28.220021 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/work/a7702c3a2f3542948a8cdd7d9e63fdb8_id_rsa.pub
2026-03-10 00:00:28.220049 | orchestrator -> localhost | The key fingerprint is:
2026-03-10 00:00:28.220077 | orchestrator -> localhost | SHA256:cu7t6Tuywh07pwnmNvLS4pxCzXnb+H+QnUiCiLF3eXw zuul-build-sshkey
2026-03-10 00:00:28.220099 | orchestrator -> localhost | The key's randomart image is:
2026-03-10 00:00:28.220128 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-10 00:00:28.220151 | orchestrator -> localhost | | |
2026-03-10 00:00:28.220173 | orchestrator -> localhost | | . |
2026-03-10 00:00:28.220194 | orchestrator -> localhost | | + . + |
2026-03-10 00:00:28.220213 | orchestrator -> localhost | | o o + + E |
2026-03-10 00:00:28.220233 | orchestrator -> localhost | | + o..S+ + . |
2026-03-10 00:00:28.220257 | orchestrator -> localhost | | . + .+. + o |
2026-03-10 00:00:28.220277 | orchestrator -> localhost | | . +o=.o . |
2026-03-10 00:00:28.220298 | orchestrator -> localhost | | ..++Oo*oo.. |
2026-03-10 00:00:28.220318 | orchestrator -> localhost | | o+*o+*XB+ |
2026-03-10 00:00:28.220338 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-10 00:00:28.220393 | orchestrator -> localhost | ok: Runtime: 0:00:02.586169
2026-03-10 00:00:28.227910 |
2026-03-10 00:00:28.229795 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-10 00:00:28.324470 | orchestrator | ok
2026-03-10 00:00:28.357622 | orchestrator | included: /var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-10 00:00:28.402978 |
2026-03-10 00:00:28.403089 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-10 00:00:28.457310 | orchestrator | skipping: Conditional result was False
2026-03-10 00:00:28.466786 |
2026-03-10 00:00:28.468528 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-10 00:00:29.723651 | orchestrator | changed
2026-03-10 00:00:29.728710 |
2026-03-10 00:00:29.728788 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-10 00:00:30.057802 | orchestrator | ok
2026-03-10 00:00:30.068088 |
2026-03-10 00:00:30.068185 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-10 00:00:30.610767 | orchestrator | ok
2026-03-10 00:00:30.623382 |
2026-03-10 00:00:30.623483 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-10 00:00:31.215629 | orchestrator | ok
2026-03-10 00:00:31.220465 |
2026-03-10 00:00:31.220539 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-10 00:00:31.267033 | orchestrator | skipping: Conditional result was False
2026-03-10 00:00:31.272537 |
2026-03-10 00:00:31.272633 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-10 00:00:32.397648 | orchestrator -> localhost | changed
2026-03-10 00:00:32.409477 |
2026-03-10 00:00:32.409573 | TASK [add-build-sshkey : Add back temp key]
2026-03-10 00:00:33.334805 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/work/a7702c3a2f3542948a8cdd7d9e63fdb8_id_rsa (zuul-build-sshkey)
2026-03-10 00:00:33.335057 | orchestrator -> localhost | ok: Runtime: 0:00:00.024314
2026-03-10 00:00:33.350415 |
2026-03-10 00:00:33.350501 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-10 00:00:33.923347 | orchestrator | ok
2026-03-10 00:00:33.928256 |
2026-03-10 00:00:33.928339 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-10 00:00:33.972374 | orchestrator | skipping: Conditional result was False
2026-03-10 00:00:34.062691 |
2026-03-10 00:00:34.062789 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-10 00:00:34.711353 | orchestrator | ok
2026-03-10 00:00:34.737588 |
2026-03-10 00:00:34.737691 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-10 00:00:34.785483 | orchestrator | ok
2026-03-10 00:00:34.797387 |
2026-03-10 00:00:34.797476 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-10 00:00:35.375579 | orchestrator -> localhost | ok
2026-03-10 00:00:35.382630 |
2026-03-10 00:00:35.382726 | TASK [validate-host : Collect information about the host]
2026-03-10 00:00:36.928184 | orchestrator | ok
2026-03-10 00:00:36.954851 |
2026-03-10 00:00:36.954998 | TASK [validate-host : Sanitize hostname]
2026-03-10 00:00:37.133439 | orchestrator | ok
2026-03-10 00:00:37.137837 |
2026-03-10 00:00:37.137925 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-10 00:00:38.604342 | orchestrator -> localhost | changed
2026-03-10 00:00:38.609618 |
2026-03-10 00:00:38.609713 | TASK [validate-host : Collect information about zuul worker]
2026-03-10 00:00:39.408260 | orchestrator | ok
2026-03-10 00:00:39.412780 |
2026-03-10 00:00:39.412869 | TASK [validate-host : Write out all zuul information for each host]
2026-03-10 00:00:41.346606 | orchestrator -> localhost | changed
2026-03-10 00:00:41.356969 |
2026-03-10 00:00:41.358705 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-10 00:00:41.664817 | orchestrator | ok
2026-03-10 00:00:41.677370 |
2026-03-10 00:00:41.677470 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-10 00:01:59.439054 | orchestrator | changed:
2026-03-10 00:01:59.440580 | orchestrator | .d..t...... src/
2026-03-10 00:01:59.440648 | orchestrator | .d..t...... src/github.com/
2026-03-10 00:01:59.440682 | orchestrator | .d..t...... src/github.com/osism/
2026-03-10 00:01:59.440711 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-10 00:01:59.440739 | orchestrator | RedHat.yml
2026-03-10 00:01:59.458039 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-10 00:01:59.458056 | orchestrator | RedHat.yml
2026-03-10 00:01:59.458106 | orchestrator | = 1.53.0"...
2026-03-10 00:02:13.193155 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-10 00:02:13.345129 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-10 00:02:16.423931 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-10 00:02:16.518040 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-10 00:02:17.242280 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-10 00:02:17.624762 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-10 00:02:18.245038 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-10 00:02:18.245083 | orchestrator |
2026-03-10 00:02:18.245089 | orchestrator | Providers are signed by their developers.
2026-03-10 00:02:18.245094 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-10 00:02:18.245099 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-10 00:02:18.245105 | orchestrator |
2026-03-10 00:02:18.245109 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-10 00:02:18.245114 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-10 00:02:18.245126 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-10 00:02:18.245130 | orchestrator | you run "tofu init" in the future.
2026-03-10 00:02:18.245429 | orchestrator |
2026-03-10 00:02:18.245476 | orchestrator | OpenTofu has been successfully initialized!
2026-03-10 00:02:18.245482 | orchestrator |
2026-03-10 00:02:18.245486 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-10 00:02:18.245491 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-10 00:02:18.245495 | orchestrator | should now work.
2026-03-10 00:02:18.245499 | orchestrator |
2026-03-10 00:02:18.245503 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-10 00:02:18.245507 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-10 00:02:18.245511 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-10 00:02:18.410505 | orchestrator | Created and switched to workspace "ci"!
2026-03-10 00:02:18.410576 | orchestrator |
2026-03-10 00:02:18.410586 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-10 00:02:18.410595 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-10 00:02:18.410602 | orchestrator | for this configuration.
2026-03-10 00:02:18.558649 | orchestrator | ci.auto.tfvars
2026-03-10 00:02:19.476177 | orchestrator | default_custom.tf
2026-03-10 00:02:20.636317 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-10 00:02:21.156389 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-10 00:02:21.454080 | orchestrator |
2026-03-10 00:02:21.454129 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-10 00:02:21.454136 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-10 00:02:21.454140 | orchestrator | + create
2026-03-10 00:02:21.454145 | orchestrator | <= read (data resources)
2026-03-10 00:02:21.454149 | orchestrator |
2026-03-10 00:02:21.454154 | orchestrator | OpenTofu will perform the following actions:
2026-03-10 00:02:21.454158 | orchestrator |
2026-03-10 00:02:21.454162 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-10 00:02:21.454167 | orchestrator | # (config refers to values not yet known)
2026-03-10 00:02:21.454171 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-10 00:02:21.454175 | orchestrator | + checksum = (known after apply)
2026-03-10 00:02:21.454179 | orchestrator | + created_at = (known after apply)
2026-03-10 00:02:21.454183 | orchestrator | + file = (known after apply)
2026-03-10 00:02:21.454187 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454207 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454211 | orchestrator | + min_disk_gb = (known after apply)
2026-03-10 00:02:21.454215 | orchestrator | + min_ram_mb = (known after apply)
2026-03-10 00:02:21.454219 | orchestrator | + most_recent = true
2026-03-10 00:02:21.454223 | orchestrator | + name = (known after apply)
2026-03-10 00:02:21.454227 | orchestrator | + protected = (known after apply)
2026-03-10 00:02:21.454231 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454237 | orchestrator | + schema = (known after apply)
2026-03-10 00:02:21.454241 | orchestrator | + size_bytes = (known after apply)
2026-03-10 00:02:21.454244 | orchestrator | + tags = (known after apply)
2026-03-10 00:02:21.454248 | orchestrator | + updated_at = (known after apply)
2026-03-10 00:02:21.454252 | orchestrator | }
2026-03-10 00:02:21.454256 | orchestrator |
2026-03-10 00:02:21.454260 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-10 00:02:21.454264 | orchestrator | # (config refers to values not yet known)
2026-03-10 00:02:21.454268 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-10 00:02:21.454272 | orchestrator | + checksum = (known after apply)
2026-03-10 00:02:21.454276 | orchestrator | + created_at = (known after apply)
2026-03-10 00:02:21.454279 | orchestrator | + file = (known after apply)
2026-03-10 00:02:21.454283 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454287 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454291 | orchestrator | + min_disk_gb = (known after apply)
2026-03-10 00:02:21.454294 | orchestrator | + min_ram_mb = (known after apply)
2026-03-10 00:02:21.454298 | orchestrator | + most_recent = true
2026-03-10 00:02:21.454302 | orchestrator | + name = (known after apply)
2026-03-10 00:02:21.454306 | orchestrator | + protected = (known after apply)
2026-03-10 00:02:21.454309 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454313 | orchestrator | + schema = (known after apply)
2026-03-10 00:02:21.454317 | orchestrator | + size_bytes = (known after apply)
2026-03-10 00:02:21.454321 | orchestrator | + tags = (known after apply)
2026-03-10 00:02:21.454324 | orchestrator | + updated_at = (known after apply)
2026-03-10 00:02:21.454328 | orchestrator | }
2026-03-10 00:02:21.454332 | orchestrator |
2026-03-10 00:02:21.454336 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-10 00:02:21.454339 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-10 00:02:21.454343 | orchestrator | + content = (known after apply)
2026-03-10 00:02:21.454347 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-10 00:02:21.454351 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-10 00:02:21.454355 | orchestrator | + content_md5 = (known after apply)
2026-03-10 00:02:21.454359 | orchestrator | + content_sha1 = (known after apply)
2026-03-10 00:02:21.454362 | orchestrator | + content_sha256 = (known after apply)
2026-03-10 00:02:21.454366 | orchestrator | + content_sha512 = (known after apply)
2026-03-10 00:02:21.454370 | orchestrator | + directory_permission = "0777"
2026-03-10 00:02:21.454374 | orchestrator | + file_permission = "0644"
2026-03-10 00:02:21.454378 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-10 00:02:21.454381 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454385 | orchestrator | }
2026-03-10 00:02:21.454389 | orchestrator |
2026-03-10 00:02:21.454393 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-10 00:02:21.454396 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-10 00:02:21.454400 | orchestrator | + content = (known after apply)
2026-03-10 00:02:21.454404 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-10 00:02:21.454408 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-10 00:02:21.454411 | orchestrator | + content_md5 = (known after apply)
2026-03-10 00:02:21.454415 | orchestrator | + content_sha1 = (known after apply)
2026-03-10 00:02:21.454419 | orchestrator | + content_sha256 = (known after apply)
2026-03-10 00:02:21.454423 | orchestrator | + content_sha512 = (known after apply)
2026-03-10 00:02:21.454426 | orchestrator | + directory_permission = "0777"
2026-03-10 00:02:21.454430 | orchestrator | + file_permission = "0644"
2026-03-10 00:02:21.454438 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-10 00:02:21.454441 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454445 | orchestrator | }
2026-03-10 00:02:21.454449 | orchestrator |
2026-03-10 00:02:21.454458 | orchestrator | # local_file.inventory will be created
2026-03-10 00:02:21.454462 | orchestrator | + resource "local_file" "inventory" {
2026-03-10 00:02:21.454466 | orchestrator | + content = (known after apply)
2026-03-10 00:02:21.454469 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-10 00:02:21.454473 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-10 00:02:21.454477 | orchestrator | + content_md5 = (known after apply)
2026-03-10 00:02:21.454480 | orchestrator | + content_sha1 = (known after apply)
2026-03-10 00:02:21.454484 | orchestrator | + content_sha256 = (known after apply)
2026-03-10 00:02:21.454488 | orchestrator | + content_sha512 = (known after apply)
2026-03-10 00:02:21.454492 | orchestrator | + directory_permission = "0777"
2026-03-10 00:02:21.454496 | orchestrator | + file_permission = "0644"
2026-03-10 00:02:21.454499 | orchestrator | + filename = "inventory.ci"
2026-03-10 00:02:21.454503 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454507 | orchestrator | }
2026-03-10 00:02:21.454510 | orchestrator |
2026-03-10 00:02:21.454514 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-10 00:02:21.454518 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-10 00:02:21.454522 | orchestrator | + content = (sensitive value)
2026-03-10 00:02:21.454525 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-10 00:02:21.454529 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-10 00:02:21.454533 | orchestrator | + content_md5 = (known after apply)
2026-03-10 00:02:21.454537 | orchestrator | + content_sha1 = (known after apply)
2026-03-10 00:02:21.454540 | orchestrator | + content_sha256 = (known after apply)
2026-03-10 00:02:21.454551 | orchestrator | + content_sha512 = (known after apply)
2026-03-10 00:02:21.454556 | orchestrator | + directory_permission = "0700"
2026-03-10 00:02:21.454559 | orchestrator | + file_permission = "0600"
2026-03-10 00:02:21.454563 | orchestrator | + filename = ".id_rsa.ci"
2026-03-10 00:02:21.454567 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454571 | orchestrator | }
2026-03-10 00:02:21.454574 | orchestrator |
2026-03-10 00:02:21.454578 | orchestrator | # null_resource.node_semaphore will be created
2026-03-10 00:02:21.454582 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-10 00:02:21.454586 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454589 | orchestrator | }
2026-03-10 00:02:21.454593 | orchestrator |
2026-03-10 00:02:21.454597 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-10 00:02:21.454601 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-10 00:02:21.454604 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.454608 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.454612 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454616 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.454620 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454623 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-10 00:02:21.454627 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454631 | orchestrator | + size = 80
2026-03-10 00:02:21.454635 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.454638 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.454642 | orchestrator | }
2026-03-10 00:02:21.454646 | orchestrator |
2026-03-10 00:02:21.454650 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-10 00:02:21.454653 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-10 00:02:21.454657 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.454661 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.454681 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454689 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.454692 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454696 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-10 00:02:21.454700 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454704 | orchestrator | + size = 80
2026-03-10 00:02:21.454707 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.454711 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.454715 | orchestrator | }
2026-03-10 00:02:21.454718 | orchestrator |
2026-03-10 00:02:21.454722 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-10 00:02:21.454726 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-10 00:02:21.454755 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.454760 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.454763 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454767 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.454771 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454775 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-10 00:02:21.454779 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454784 | orchestrator | + size = 80
2026-03-10 00:02:21.454788 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.454792 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.454796 | orchestrator | }
2026-03-10 00:02:21.454801 | orchestrator |
2026-03-10 00:02:21.454805 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-10 00:02:21.454809 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-10 00:02:21.454813 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.454817 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.454821 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454825 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.454829 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454833 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-10 00:02:21.454837 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454841 | orchestrator | + size = 80
2026-03-10 00:02:21.454844 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.454848 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.454852 | orchestrator | }
2026-03-10 00:02:21.454856 | orchestrator |
2026-03-10 00:02:21.454859 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-10 00:02:21.454863 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-10 00:02:21.454867 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.454871 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.454875 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454878 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.454882 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454932 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-10 00:02:21.454938 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454941 | orchestrator | + size = 80
2026-03-10 00:02:21.454945 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.454949 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.454953 | orchestrator | }
2026-03-10 00:02:21.454957 | orchestrator |
2026-03-10 00:02:21.454960 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-10 00:02:21.454964 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-10 00:02:21.454968 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.454972 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.454975 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.454983 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.454987 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.454990 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-10 00:02:21.454994 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.454998 | orchestrator | + size = 80
2026-03-10 00:02:21.455002 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455005 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455009 | orchestrator | }
2026-03-10 00:02:21.455013 | orchestrator |
2026-03-10 00:02:21.455017 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-10 00:02:21.455025 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-10 00:02:21.455029 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455032 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455037 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455043 | orchestrator | + image_id = (known after apply)
2026-03-10 00:02:21.455050 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455056 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-10 00:02:21.455060 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455063 | orchestrator | + size = 80
2026-03-10 00:02:21.455067 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455071 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455074 | orchestrator | }
2026-03-10 00:02:21.455078 | orchestrator |
2026-03-10 00:02:21.455082 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-10 00:02:21.455086 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455090 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455094 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455097 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455101 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455105 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-10 00:02:21.455109 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455112 | orchestrator | + size = 20
2026-03-10 00:02:21.455116 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455120 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455124 | orchestrator | }
2026-03-10 00:02:21.455127 | orchestrator |
2026-03-10 00:02:21.455131 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-10 00:02:21.455135 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455138 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455142 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455146 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455150 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455153 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-10 00:02:21.455157 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455161 | orchestrator | + size = 20
2026-03-10 00:02:21.455164 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455191 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455196 | orchestrator | }
2026-03-10 00:02:21.455200 | orchestrator |
2026-03-10 00:02:21.455204 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-10 00:02:21.455208 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455211 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455215 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455219 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455223 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455227 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-10 00:02:21.455230 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455238 | orchestrator | + size = 20
2026-03-10 00:02:21.455242 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455246 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455249 | orchestrator | }
2026-03-10 00:02:21.455253 | orchestrator |
2026-03-10 00:02:21.455257 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-10 00:02:21.455261 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455265 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455268 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455272 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455276 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455280 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-10 00:02:21.455283 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455287 | orchestrator | + size = 20
2026-03-10 00:02:21.455291 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455294 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455298 | orchestrator | }
2026-03-10 00:02:21.455302 | orchestrator |
2026-03-10 00:02:21.455306 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-10 00:02:21.455309 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455313 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455317 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455321 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455324 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455328 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-10 00:02:21.455332 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455339 | orchestrator | + size = 20
2026-03-10 00:02:21.455343 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455346 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455350 | orchestrator | }
2026-03-10 00:02:21.455354 | orchestrator |
2026-03-10 00:02:21.455358 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-10 00:02:21.455362 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455365 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455369 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455406 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455410 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455414 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-10 00:02:21.455417 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455421 | orchestrator | + size = 20
2026-03-10 00:02:21.455425 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455429 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455432 | orchestrator | }
2026-03-10 00:02:21.455436 | orchestrator |
2026-03-10 00:02:21.455440 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-10 00:02:21.455444 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455447 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455451 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455455 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455462 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455466 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-10 00:02:21.455470 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455474 | orchestrator | + size = 20
2026-03-10 00:02:21.455478 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455481 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455485 | orchestrator | }
2026-03-10 00:02:21.455489 | orchestrator |
2026-03-10 00:02:21.455493 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-10 00:02:21.455497 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-10 00:02:21.455504 | orchestrator | + attachment = (known after apply)
2026-03-10 00:02:21.455508 | orchestrator | + availability_zone = "nova"
2026-03-10 00:02:21.455511 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.455515 | orchestrator | + metadata = (known after apply)
2026-03-10 00:02:21.455519 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-10 00:02:21.455523 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.455526 | orchestrator | + size = 20
2026-03-10 00:02:21.455530 | orchestrator | + volume_retype_policy = "never"
2026-03-10 00:02:21.455534 | orchestrator | + volume_type = "ssd"
2026-03-10 00:02:21.455538 | orchestrator | }
2026-03-10 00:02:21.455541 | orchestrator |
2026-03-10 00:02:21.455545 | orchestrator | # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-10 00:02:21.455549 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-10 00:02:21.455553 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:21.455556 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.455560 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.455564 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:21.455568 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-10 00:02:21.455571 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.455575 | orchestrator | + size = 20 2026-03-10 00:02:21.455579 | orchestrator | + volume_retype_policy = "never" 2026-03-10 00:02:21.455582 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:21.455586 | orchestrator | } 2026-03-10 00:02:21.455590 | orchestrator | 2026-03-10 00:02:21.455594 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-10 00:02:21.455597 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-10 00:02:21.455601 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.455605 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.455609 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.455612 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.455616 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.455620 | orchestrator | + config_drive = true 2026-03-10 00:02:21.455623 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.455627 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.455631 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-10 00:02:21.455635 | orchestrator | + force_delete = false 2026-03-10 00:02:21.455638 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.455642 | 
orchestrator | + id = (known after apply) 2026-03-10 00:02:21.455646 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.455650 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.455655 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.455662 | orchestrator | + name = "testbed-manager" 2026-03-10 00:02:21.455674 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.455677 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.455681 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.455685 | orchestrator | + stop_before_destroy = false 2026-03-10 00:02:21.455689 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.455692 | orchestrator | + user_data = (sensitive value) 2026-03-10 00:02:21.455696 | orchestrator | 2026-03-10 00:02:21.455700 | orchestrator | + block_device { 2026-03-10 00:02:21.455704 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.455707 | orchestrator | + delete_on_termination = false 2026-03-10 00:02:21.455714 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.455718 | orchestrator | + multiattach = false 2026-03-10 00:02:21.455722 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.455725 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.455732 | orchestrator | } 2026-03-10 00:02:21.455736 | orchestrator | 2026-03-10 00:02:21.455740 | orchestrator | + network { 2026-03-10 00:02:21.455744 | orchestrator | + access_network = false 2026-03-10 00:02:21.455747 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.455751 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.455755 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.455759 | orchestrator | + name = (known after apply) 2026-03-10 00:02:21.455762 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.455766 | orchestrator | + uuid = (known after apply) 2026-03-10 
00:02:21.455770 | orchestrator | } 2026-03-10 00:02:21.455774 | orchestrator | } 2026-03-10 00:02:21.455777 | orchestrator | 2026-03-10 00:02:21.455781 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-10 00:02:21.455785 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-10 00:02:21.455789 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.455793 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.455796 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.455800 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.455804 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.455807 | orchestrator | + config_drive = true 2026-03-10 00:02:21.455811 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.455815 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.455819 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-10 00:02:21.455822 | orchestrator | + force_delete = false 2026-03-10 00:02:21.455826 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.455830 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.455834 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.455837 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.455841 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.455845 | orchestrator | + name = "testbed-node-0" 2026-03-10 00:02:21.455849 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.455855 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.455859 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.455863 | orchestrator | + stop_before_destroy = false 2026-03-10 00:02:21.455867 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.455870 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-10 00:02:21.455874 | orchestrator | 2026-03-10 00:02:21.455878 | orchestrator | + block_device { 2026-03-10 00:02:21.455882 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.455885 | orchestrator | + delete_on_termination = false 2026-03-10 00:02:21.455904 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.455908 | orchestrator | + multiattach = false 2026-03-10 00:02:21.455912 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.455916 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.455920 | orchestrator | } 2026-03-10 00:02:21.455923 | orchestrator | 2026-03-10 00:02:21.455927 | orchestrator | + network { 2026-03-10 00:02:21.455931 | orchestrator | + access_network = false 2026-03-10 00:02:21.455935 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.455939 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.455942 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.455946 | orchestrator | + name = (known after apply) 2026-03-10 00:02:21.455950 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.455954 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.455957 | orchestrator | } 2026-03-10 00:02:21.455961 | orchestrator | } 2026-03-10 00:02:21.455965 | orchestrator | 2026-03-10 00:02:21.455969 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-10 00:02:21.455972 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-10 00:02:21.455976 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.455983 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.455987 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.455991 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.455994 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.455998 
| orchestrator | + config_drive = true 2026-03-10 00:02:21.456002 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.456005 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.456009 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-10 00:02:21.456013 | orchestrator | + force_delete = false 2026-03-10 00:02:21.456017 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.456021 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.456024 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.456028 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.456032 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.456035 | orchestrator | + name = "testbed-node-1" 2026-03-10 00:02:21.456039 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.456043 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.456047 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.456051 | orchestrator | + stop_before_destroy = false 2026-03-10 00:02:21.456054 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.456058 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-10 00:02:21.456101 | orchestrator | 2026-03-10 00:02:21.456106 | orchestrator | + block_device { 2026-03-10 00:02:21.456110 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.456114 | orchestrator | + delete_on_termination = false 2026-03-10 00:02:21.456117 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.456121 | orchestrator | + multiattach = false 2026-03-10 00:02:21.456125 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.456146 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456150 | orchestrator | } 2026-03-10 00:02:21.456153 | orchestrator | 2026-03-10 00:02:21.456157 | orchestrator | + network { 2026-03-10 00:02:21.456161 | orchestrator | + access_network = 
false 2026-03-10 00:02:21.456165 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.456168 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.456172 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.456176 | orchestrator | + name = (known after apply) 2026-03-10 00:02:21.456180 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.456200 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456204 | orchestrator | } 2026-03-10 00:02:21.456207 | orchestrator | } 2026-03-10 00:02:21.456211 | orchestrator | 2026-03-10 00:02:21.456215 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-10 00:02:21.456219 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-10 00:02:21.456223 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.456226 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.456230 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.456234 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.456241 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.456245 | orchestrator | + config_drive = true 2026-03-10 00:02:21.456249 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.456253 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.456257 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-10 00:02:21.456260 | orchestrator | + force_delete = false 2026-03-10 00:02:21.456264 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.456268 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.456272 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.456279 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.456282 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.456286 | orchestrator | + name = 
"testbed-node-2" 2026-03-10 00:02:21.456290 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.456294 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.456297 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.456301 | orchestrator | + stop_before_destroy = false 2026-03-10 00:02:21.456305 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.456321 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-10 00:02:21.456325 | orchestrator | 2026-03-10 00:02:21.456329 | orchestrator | + block_device { 2026-03-10 00:02:21.456333 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.456336 | orchestrator | + delete_on_termination = false 2026-03-10 00:02:21.456340 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.456347 | orchestrator | + multiattach = false 2026-03-10 00:02:21.456351 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.456355 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456359 | orchestrator | } 2026-03-10 00:02:21.456364 | orchestrator | 2026-03-10 00:02:21.456368 | orchestrator | + network { 2026-03-10 00:02:21.456372 | orchestrator | + access_network = false 2026-03-10 00:02:21.456376 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.456381 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.456385 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.456388 | orchestrator | + name = (known after apply) 2026-03-10 00:02:21.456393 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.456397 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456401 | orchestrator | } 2026-03-10 00:02:21.456405 | orchestrator | } 2026-03-10 00:02:21.456409 | orchestrator | 2026-03-10 00:02:21.456413 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-10 00:02:21.456417 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-10 00:02:21.456421 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.456425 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.456428 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.456432 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.456436 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.456440 | orchestrator | + config_drive = true 2026-03-10 00:02:21.456443 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.456447 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.456451 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-10 00:02:21.456455 | orchestrator | + force_delete = false 2026-03-10 00:02:21.456458 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.456462 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.456466 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.456470 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.456474 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.456477 | orchestrator | + name = "testbed-node-3" 2026-03-10 00:02:21.456481 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.456485 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.456489 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.456492 | orchestrator | + stop_before_destroy = false 2026-03-10 00:02:21.456496 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.456500 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-10 00:02:21.456504 | orchestrator | 2026-03-10 00:02:21.456507 | orchestrator | + block_device { 2026-03-10 00:02:21.456517 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.456521 | orchestrator | + delete_on_termination = false 2026-03-10 
00:02:21.456525 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.456531 | orchestrator | + multiattach = false 2026-03-10 00:02:21.456535 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.456539 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456543 | orchestrator | } 2026-03-10 00:02:21.456546 | orchestrator | 2026-03-10 00:02:21.456550 | orchestrator | + network { 2026-03-10 00:02:21.456554 | orchestrator | + access_network = false 2026-03-10 00:02:21.456558 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.456561 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.456565 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.456569 | orchestrator | + name = (known after apply) 2026-03-10 00:02:21.456573 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.456576 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456580 | orchestrator | } 2026-03-10 00:02:21.456584 | orchestrator | } 2026-03-10 00:02:21.456588 | orchestrator | 2026-03-10 00:02:21.456591 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-10 00:02:21.456595 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-10 00:02:21.456599 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.456603 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.456607 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.456610 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.456614 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.456618 | orchestrator | + config_drive = true 2026-03-10 00:02:21.456621 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.456625 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.456629 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-10 00:02:21.456633 | 
orchestrator | + force_delete = false 2026-03-10 00:02:21.456636 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.456640 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.456644 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.456648 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.456651 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.456655 | orchestrator | + name = "testbed-node-4" 2026-03-10 00:02:21.456659 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.456663 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.456666 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.456670 | orchestrator | + stop_before_destroy = false 2026-03-10 00:02:21.456674 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.456677 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-10 00:02:21.456681 | orchestrator | 2026-03-10 00:02:21.456685 | orchestrator | + block_device { 2026-03-10 00:02:21.456689 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.456692 | orchestrator | + delete_on_termination = false 2026-03-10 00:02:21.456696 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.456700 | orchestrator | + multiattach = false 2026-03-10 00:02:21.456704 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.456707 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456711 | orchestrator | } 2026-03-10 00:02:21.456715 | orchestrator | 2026-03-10 00:02:21.456719 | orchestrator | + network { 2026-03-10 00:02:21.456722 | orchestrator | + access_network = false 2026-03-10 00:02:21.456726 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.456730 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.456734 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.456737 | orchestrator | + name = (known 
after apply) 2026-03-10 00:02:21.456741 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.456748 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456752 | orchestrator | } 2026-03-10 00:02:21.456755 | orchestrator | } 2026-03-10 00:02:21.456762 | orchestrator | 2026-03-10 00:02:21.456766 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-10 00:02:21.456770 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-10 00:02:21.456773 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-10 00:02:21.456777 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-10 00:02:21.456781 | orchestrator | + all_metadata = (known after apply) 2026-03-10 00:02:21.456784 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.456788 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:21.456792 | orchestrator | + config_drive = true 2026-03-10 00:02:21.456796 | orchestrator | + created = (known after apply) 2026-03-10 00:02:21.456799 | orchestrator | + flavor_id = (known after apply) 2026-03-10 00:02:21.456803 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-10 00:02:21.456807 | orchestrator | + force_delete = false 2026-03-10 00:02:21.456813 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-10 00:02:21.456817 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.456821 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:21.456825 | orchestrator | + image_name = (known after apply) 2026-03-10 00:02:21.456828 | orchestrator | + key_pair = "testbed" 2026-03-10 00:02:21.456832 | orchestrator | + name = "testbed-node-5" 2026-03-10 00:02:21.456836 | orchestrator | + power_state = "active" 2026-03-10 00:02:21.456840 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.456843 | orchestrator | + security_groups = (known after apply) 2026-03-10 00:02:21.456847 | orchestrator | + 
stop_before_destroy = false 2026-03-10 00:02:21.456851 | orchestrator | + updated = (known after apply) 2026-03-10 00:02:21.456855 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-10 00:02:21.456858 | orchestrator | 2026-03-10 00:02:21.456862 | orchestrator | + block_device { 2026-03-10 00:02:21.456866 | orchestrator | + boot_index = 0 2026-03-10 00:02:21.456870 | orchestrator | + delete_on_termination = false 2026-03-10 00:02:21.456873 | orchestrator | + destination_type = "volume" 2026-03-10 00:02:21.456877 | orchestrator | + multiattach = false 2026-03-10 00:02:21.456881 | orchestrator | + source_type = "volume" 2026-03-10 00:02:21.456884 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456901 | orchestrator | } 2026-03-10 00:02:21.456905 | orchestrator | 2026-03-10 00:02:21.456909 | orchestrator | + network { 2026-03-10 00:02:21.456913 | orchestrator | + access_network = false 2026-03-10 00:02:21.456917 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-10 00:02:21.456920 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-10 00:02:21.456924 | orchestrator | + mac = (known after apply) 2026-03-10 00:02:21.456928 | orchestrator | + name = (known after apply) 2026-03-10 00:02:21.456932 | orchestrator | + port = (known after apply) 2026-03-10 00:02:21.456935 | orchestrator | + uuid = (known after apply) 2026-03-10 00:02:21.456939 | orchestrator | } 2026-03-10 00:02:21.456943 | orchestrator | } 2026-03-10 00:02:21.456946 | orchestrator | 2026-03-10 00:02:21.456950 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-10 00:02:21.456954 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-10 00:02:21.456958 | orchestrator | + fingerprint = (known after apply) 2026-03-10 00:02:21.456961 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.456965 | orchestrator | + name = "testbed" 2026-03-10 00:02:21.456969 | orchestrator | + private_key = 
(sensitive value) 2026-03-10 00:02:21.456972 | orchestrator | + public_key = (known after apply) 2026-03-10 00:02:21.456976 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.456980 | orchestrator | + user_id = (known after apply) 2026-03-10 00:02:21.456983 | orchestrator | } 2026-03-10 00:02:21.456987 | orchestrator | 2026-03-10 00:02:21.456991 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-10 00:02:21.456995 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457048 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457052 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457055 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457059 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457063 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457067 | orchestrator | } 2026-03-10 00:02:21.457070 | orchestrator | 2026-03-10 00:02:21.457074 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-10 00:02:21.457078 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457082 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457085 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457089 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457093 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457097 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457100 | orchestrator | } 2026-03-10 00:02:21.457104 | orchestrator | 2026-03-10 00:02:21.457108 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-10 00:02:21.457112 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-03-10 00:02:21.457115 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457119 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457123 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457126 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457130 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457134 | orchestrator | } 2026-03-10 00:02:21.457138 | orchestrator | 2026-03-10 00:02:21.457141 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-10 00:02:21.457145 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457149 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457153 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457157 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457160 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457164 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457168 | orchestrator | } 2026-03-10 00:02:21.457172 | orchestrator | 2026-03-10 00:02:21.457175 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-10 00:02:21.457179 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457183 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457187 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457191 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457197 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457204 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457208 | orchestrator | } 2026-03-10 00:02:21.457211 | orchestrator | 2026-03-10 00:02:21.457215 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-03-10 00:02:21.457219 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457223 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457227 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457230 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457234 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457238 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457241 | orchestrator | } 2026-03-10 00:02:21.457245 | orchestrator | 2026-03-10 00:02:21.457249 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-03-10 00:02:21.457253 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457256 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457260 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457264 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457268 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457276 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457279 | orchestrator | } 2026-03-10 00:02:21.457283 | orchestrator | 2026-03-10 00:02:21.457287 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-03-10 00:02:21.457291 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-10 00:02:21.457294 | orchestrator | + device = (known after apply) 2026-03-10 00:02:21.457298 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.457302 | orchestrator | + instance_id = (known after apply) 2026-03-10 00:02:21.457306 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.457309 | orchestrator | + volume_id = (known after apply) 2026-03-10 00:02:21.457313 | orchestrator | } 2026-03-10 
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
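The SSH rule just planned corresponds to a definition along these lines. The attribute values match the plan; the `security_group_id` reference is an assumption about how the rule is wired to its group, since the plan only shows `(known after apply)` for it:

```hcl
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description      = "ssh"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "tcp"
  port_range_min   = 22
  port_range_max   = 22
  remote_ip_prefix = "0.0.0.0/0"
  # Assumed reference; computed at apply time in the plan above.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```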
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }
00:02:21.461286 | orchestrator | 2026-03-10 00:02:21.461290 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-03-10 00:02:21.461294 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-03-10 00:02:21.461297 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.461301 | orchestrator | + description = "node security group" 2026-03-10 00:02:21.461305 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.461308 | orchestrator | + name = "testbed-node" 2026-03-10 00:02:21.461312 | orchestrator | + region = (known after apply) 2026-03-10 00:02:21.461316 | orchestrator | + stateful = (known after apply) 2026-03-10 00:02:21.461320 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:21.461323 | orchestrator | } 2026-03-10 00:02:21.461327 | orchestrator | 2026-03-10 00:02:21.461331 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-03-10 00:02:21.461334 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-03-10 00:02:21.461338 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:21.461342 | orchestrator | + cidr = "192.168.16.0/20" 2026-03-10 00:02:21.461346 | orchestrator | + dns_nameservers = [ 2026-03-10 00:02:21.461350 | orchestrator | + "8.8.8.8", 2026-03-10 00:02:21.461353 | orchestrator | + "9.9.9.9", 2026-03-10 00:02:21.461357 | orchestrator | ] 2026-03-10 00:02:21.461361 | orchestrator | + enable_dhcp = true 2026-03-10 00:02:21.461365 | orchestrator | + gateway_ip = (known after apply) 2026-03-10 00:02:21.461368 | orchestrator | + id = (known after apply) 2026-03-10 00:02:21.461372 | orchestrator | + ip_version = 4 2026-03-10 00:02:21.461376 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-10 00:02:21.461380 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-10 00:02:21.461383 | orchestrator | + name = "subnet-testbed-management" 
2026-03-10 00:02:21.461387 | orchestrator | + network_id = (known after apply)
2026-03-10 00:02:21.461391 | orchestrator | + no_gateway = false
2026-03-10 00:02:21.461395 | orchestrator | + region = (known after apply)
2026-03-10 00:02:21.461398 | orchestrator | + service_types = (known after apply)
2026-03-10 00:02:21.461406 | orchestrator | + tenant_id = (known after apply)
2026-03-10 00:02:21.461409 | orchestrator |
2026-03-10 00:02:21.461413 | orchestrator | + allocation_pool {
2026-03-10 00:02:21.461417 | orchestrator | + end = "192.168.31.250"
2026-03-10 00:02:21.461421 | orchestrator | + start = "192.168.31.200"
2026-03-10 00:02:21.461424 | orchestrator | }
2026-03-10 00:02:21.461428 | orchestrator | }
2026-03-10 00:02:21.461432 | orchestrator |
2026-03-10 00:02:21.461436 | orchestrator | # terraform_data.image will be created
2026-03-10 00:02:21.461439 | orchestrator | + resource "terraform_data" "image" {
2026-03-10 00:02:21.461443 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.461447 | orchestrator | + input = "Ubuntu 24.04"
2026-03-10 00:02:21.461450 | orchestrator | + output = (known after apply)
2026-03-10 00:02:21.461454 | orchestrator | }
2026-03-10 00:02:21.461458 | orchestrator |
2026-03-10 00:02:21.461462 | orchestrator | # terraform_data.image_node will be created
2026-03-10 00:02:21.461465 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-10 00:02:21.461469 | orchestrator | + id = (known after apply)
2026-03-10 00:02:21.461473 | orchestrator | + input = "Ubuntu 24.04"
2026-03-10 00:02:21.461476 | orchestrator | + output = (known after apply)
2026-03-10 00:02:21.461480 | orchestrator | }
2026-03-10 00:02:21.461484 | orchestrator |
2026-03-10 00:02:21.461488 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-10 00:02:21.461491 | orchestrator |
2026-03-10 00:02:21.461495 | orchestrator | Changes to Outputs:
2026-03-10 00:02:21.461499 | orchestrator | + manager_address = (sensitive value)
2026-03-10 00:02:21.461503 | orchestrator | + private_key = (sensitive value)
2026-03-10 00:02:21.684679 | orchestrator | terraform_data.image_node: Creating...
2026-03-10 00:02:21.686879 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=def8f7c4-4d26-b863-e4fc-bc3c6c0205e6]
2026-03-10 00:02:21.687399 | orchestrator | terraform_data.image: Creating...
2026-03-10 00:02:21.690209 | orchestrator | terraform_data.image: Creation complete after 0s [id=8b5b834d-4470-378f-d39f-6faf8e0640d9]
2026-03-10 00:02:21.710945 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-10 00:02:21.740721 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-10 00:02:21.746961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-10 00:02:21.747011 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-10 00:02:21.747019 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-10 00:02:21.747261 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-10 00:02:21.749024 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-10 00:02:21.755384 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-10 00:02:21.758145 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-10 00:02:21.758355 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-10 00:02:22.218954 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-10 00:02:22.226633 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-10 00:02:22.272569 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-10 00:02:22.277256 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-10 00:02:22.943033 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=e14686e6-83cf-4601-a1fa-915973290008]
2026-03-10 00:02:22.952932 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-10 00:02:23.013297 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-10 00:02:23.021831 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-10 00:02:25.381955 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=a252bbef-4467-4af4-a387-4994b1c9e49a]
2026-03-10 00:02:25.390522 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-10 00:02:25.393586 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0741fd77379670924498ccd7e88af8e2d2ac3990]
2026-03-10 00:02:25.393837 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=1d3a34ea-f16d-4f10-8269-5937a58b6a14]
2026-03-10 00:02:25.398998 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-10 00:02:25.405737 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=f86d111d-1a96-4282-a6fb-aea85f8e4c5d]
2026-03-10 00:02:25.409248 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-10 00:02:25.411053 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-10 00:02:25.413874 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=5679304f0a7ab2599eb712b561cd40006b3e39a7]
2026-03-10 00:02:25.420686 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-10 00:02:25.420928 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=b7d8aa34-d63a-4976-a853-b9d2680122e0]
2026-03-10 00:02:25.428670 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-10 00:02:25.446935 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=497bc817-8b42-47c9-935c-36bd3332f08b]
2026-03-10 00:02:25.447024 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=1827d390-92d5-42dc-b1df-e99337d10b88]
2026-03-10 00:02:25.462431 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-10 00:02:25.467599 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-10 00:02:25.470729 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=0c217fde-a42a-4606-a0be-96745b6d50a1]
2026-03-10 00:02:25.475658 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-10 00:02:25.518000 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=fbc5b701-e3a2-4a57-9c09-bea5a2018a77]
2026-03-10 00:02:25.519482 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=01fdf314-9dac-4cf9-86b2-8624031a3730]
2026-03-10 00:02:26.377261 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=d95eb3d2-1785-4008-b5c1-1beac13693f8]
2026-03-10 00:02:26.758740 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=a341a143-7f5c-4656-83a5-6fdb0d84dce1]
2026-03-10 00:02:26.763438 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-10 00:02:28.798123 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=25d57d94-2695-4aa5-876f-38a57276d3cf]
2026-03-10 00:02:28.826787 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=b9ca93df-9930-4281-85b9-8a08fee9dbfb]
2026-03-10 00:02:28.831506 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=513ba897-4681-4617-82d1-e2531ece3de8]
2026-03-10 00:02:28.847848 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=1c2aff1f-fd64-4855-be93-56f20738751e]
2026-03-10 00:02:29.308581 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=d1565e10-ffa2-449b-a353-9f25db04eeea]
2026-03-10 00:02:29.366750 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=9cd67c7f-5993-4f28-a8d7-6f86017da923]
2026-03-10 00:02:29.371757 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-10 00:02:29.373653 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-10 00:02:29.375584 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-10 00:02:29.393081 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=aeb2096c-114a-4afe-90ec-6b353b021499]
2026-03-10 00:02:29.623032 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=943ef2e2-82e6-499e-a16e-e9dc1fc624c3]
2026-03-10 00:02:29.671719 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-10 00:02:29.671762 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-10 00:02:29.671768 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-10 00:02:29.671787 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-10 00:02:29.671792 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-10 00:02:29.671796 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-10 00:02:29.695963 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=50cd3869-1462-47b2-9dad-20e65cf96eb5]
2026-03-10 00:02:29.701023 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-10 00:02:29.710106 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-10 00:02:29.714792 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-10 00:02:29.865430 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=b6f82cbc-ef6b-41cb-ad5c-850357d615a9]
2026-03-10 00:02:29.871280 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-10 00:02:30.144234 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0e4b7118-96ba-4903-8c0d-9dbc230e0a6d]
2026-03-10 00:02:30.150078 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-10 00:02:30.162376 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e68289d8-b7af-4ea8-8078-bffc2b49711c]
2026-03-10 00:02:30.183767 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-10 00:02:30.465855 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=b0365b44-7e9e-4e94-a1f8-96864b9b6b47]
2026-03-10 00:02:30.474351 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-10 00:02:30.574383 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=f28b2e36-39f0-4504-8a80-1b57b684e86c]
2026-03-10 00:02:30.574619 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e17fd9b4-65b7-4df0-904c-600ecc42cb62]
2026-03-10 00:02:30.589810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-10 00:02:30.592570 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-10 00:02:30.649069 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=16f81665-e450-4496-8e69-c3566e0ec7ef]
2026-03-10 00:02:30.662787 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-10 00:02:30.907234 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=2730a0c9-e421-45a3-ba53-5b28f824b30f]
2026-03-10 00:02:31.072731 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3a19f489-8fa9-4e0f-9295-4877b54b3ac8]
2026-03-10 00:02:31.073581 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=68cdeef7-7d12-4709-b01a-ff2f31c24165]
2026-03-10 00:02:31.120031 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=3ff82aad-e6dd-4d74-9d27-50799f462b42]
2026-03-10 00:02:31.313603 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=43310fc7-0c59-42d8-85ca-b5d8c300c62d]
2026-03-10 00:02:31.423275 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=ab3f2fe6-5f12-4c12-af15-54c1af192c63]
2026-03-10 00:02:31.586717 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=1ef34043-afef-4f78-b778-c7ee0a753dc8]
2026-03-10 00:02:31.765365 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=d210133a-92b2-401e-b484-690c9a596ec9]
2026-03-10 00:02:31.935434 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8a875291-7d6d-49e7-a837-1bae508bd93d]
2026-03-10 00:02:32.699721 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=4b284edf-bd1f-433f-bc4d-4cea200aa6fd]
2026-03-10 00:02:32.720532 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-10 00:02:32.720961 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-10 00:02:32.738297 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-10 00:02:32.739140 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-10 00:02:32.740069 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-10 00:02:32.770111 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-10 00:02:32.771464 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-10 00:02:34.899777 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=ecacc0d3-20b3-40de-9e2d-d6196ba47151]
2026-03-10 00:02:36.161134 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-10 00:02:36.161193 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-10 00:02:36.161238 | orchestrator | local_file.inventory: Creating...
2026-03-10 00:02:36.161251 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 1s [id=ab1824fdb9f2af98a104a29c4c9a1b814e9a5abe]
2026-03-10 00:02:36.161264 | orchestrator | local_file.inventory: Creation complete after 1s [id=308c08e0d4625175c335ab3bc562cf23a82f1a1b]
2026-03-10 00:02:36.164610 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ecacc0d3-20b3-40de-9e2d-d6196ba47151]
2026-03-10 00:02:42.724555 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-10 00:02:42.741012 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-10 00:02:42.741089 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-10 00:02:42.741104 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-10 00:02:42.773634 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-10 00:02:42.773724 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-10 00:02:52.733699 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-10 00:02:52.742101 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-10 00:02:52.742237 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-10 00:02:52.742254 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-10 00:02:52.774679 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-10 00:02:52.774807 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-10 00:02:53.636747 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=dc675fd4-61ff-45a4-93aa-dff5dc9c3c21]
2026-03-10 00:02:53.843277 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=40becada-2350-428d-ab3c-2002f0bb310f]
2026-03-10 00:03:02.738431 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-10 00:03:02.742802 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-10 00:03:02.742867 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-10 00:03:02.742876 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-10 00:03:03.504861 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=db4d08c8-2572-4732-a605-df0f146ad1bb]
2026-03-10 00:03:03.520540 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=7b7ef3e4-1757-4399-aa80-036d37c7d74d]
2026-03-10 00:03:03.839906 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=5f2e0739-e245-41c4-8a2f-151b854ca8b7]
2026-03-10 00:03:04.140544 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=aa887cd9-23c8-4d0d-9df1-319e1ce0611e]
2026-03-10 00:03:04.160435 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-10 00:03:04.165346 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7338571869568584449]
2026-03-10 00:03:04.170646 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-10 00:03:04.172752 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-10 00:03:04.178907 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-10 00:03:04.187754 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-10 00:03:04.189444 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-10 00:03:04.189485 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-10 00:03:04.189491 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-10 00:03:04.189964 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-10 00:03:04.191261 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-10 00:03:04.196162 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-10 00:03:07.594661 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=dc675fd4-61ff-45a4-93aa-dff5dc9c3c21/b7d8aa34-d63a-4976-a853-b9d2680122e0]
2026-03-10 00:03:07.600012 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=7b7ef3e4-1757-4399-aa80-036d37c7d74d/0c217fde-a42a-4606-a0be-96745b6d50a1]
2026-03-10 00:03:07.626150 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=db4d08c8-2572-4732-a605-df0f146ad1bb/1827d390-92d5-42dc-b1df-e99337d10b88]
2026-03-10 00:03:07.633638 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=7b7ef3e4-1757-4399-aa80-036d37c7d74d/f86d111d-1a96-4282-a6fb-aea85f8e4c5d]
2026-03-10 00:03:07.652043 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=dc675fd4-61ff-45a4-93aa-dff5dc9c3c21/497bc817-8b42-47c9-935c-36bd3332f08b]
2026-03-10 00:03:07.669379 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=db4d08c8-2572-4732-a605-df0f146ad1bb/01fdf314-9dac-4cf9-86b2-8624031a3730]
2026-03-10 00:03:13.746203 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=dc675fd4-61ff-45a4-93aa-dff5dc9c3c21/1d3a34ea-f16d-4f10-8269-5937a58b6a14]
2026-03-10 00:03:13.756123 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=7b7ef3e4-1757-4399-aa80-036d37c7d74d/a252bbef-4467-4af4-a387-4994b1c9e49a]
2026-03-10 00:03:13.781238 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=db4d08c8-2572-4732-a605-df0f146ad1bb/fbc5b701-e3a2-4a57-9c09-bea5a2018a77]
2026-03-10 00:03:14.200775 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-10 00:03:24.210180 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-10 00:03:24.769889 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=969eb7b9-182b-4a84-b83f-c5d0941f4524]
2026-03-10 00:03:24.786663 | orchestrator |
2026-03-10 00:03:24.786751 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-10 00:03:24.786766 | orchestrator |
2026-03-10 00:03:24.786778 | orchestrator | Outputs:
2026-03-10 00:03:24.786790 | orchestrator |
2026-03-10 00:03:24.786812 | orchestrator | manager_address =
2026-03-10 00:03:24.786825 | orchestrator | private_key =
2026-03-10 00:03:25.210537 | orchestrator | ok: Runtime: 0:01:11.938081
2026-03-10 00:03:25.246021 |
2026-03-10 00:03:25.246215 | TASK [Fetch manager address]
2026-03-10 00:03:25.699178 | orchestrator | ok
2026-03-10 00:03:25.714800 |
2026-03-10 00:03:25.714945 | TASK [Set manager_host address]
2026-03-10 00:03:25.794938 | orchestrator | ok
2026-03-10 00:03:25.804555 |
2026-03-10 00:03:25.804668 | LOOP [Update ansible collections]
2026-03-10 00:03:26.681759 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-10 00:03:26.682116 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-10 00:03:26.682173 | orchestrator | Starting galaxy collection install process
2026-03-10 00:03:26.682221 | orchestrator | Process install dependency map
2026-03-10 00:03:26.682345 | orchestrator | Starting collection install process
2026-03-10 00:03:26.682382 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-10 00:03:26.682418 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-10 00:03:26.682458 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-10 00:03:26.682522 | orchestrator | ok: Item: commons Runtime: 0:00:00.542097
2026-03-10 00:03:27.586961 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-10 00:03:27.587092 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-10 00:03:27.587142 | orchestrator | Starting galaxy collection install process
2026-03-10 00:03:27.587182 | orchestrator | Process install dependency map
2026-03-10 00:03:27.587219 | orchestrator | Starting collection install process
2026-03-10 00:03:27.587299 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-10 00:03:27.587332 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-10 00:03:27.587363 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-10 00:03:27.587415 | orchestrator | ok: Item: services Runtime: 0:00:00.638224
2026-03-10 00:03:27.608142 |
2026-03-10 00:03:27.608321 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-10 00:03:40.171263 | orchestrator | ok
2026-03-10 00:03:40.180643 |
2026-03-10 00:03:40.180759 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-10 00:04:40.220926 | orchestrator | ok
2026-03-10 00:04:40.228348 |
2026-03-10 00:04:40.228458 | TASK [Fetch manager ssh hostkey]
2026-03-10 00:04:41.803332 | orchestrator | Output suppressed because no_log was given
2026-03-10 00:04:41.819882 |
2026-03-10 00:04:41.820075 | TASK [Get ssh keypair from terraform environment]
2026-03-10 00:04:42.357338 | orchestrator | ok: Runtime: 0:00:00.007556
2026-03-10 00:04:42.377964 |
2026-03-10 00:04:42.378144 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-10 00:04:42.422518 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-10 00:04:42.431660 |
2026-03-10 00:04:42.431784 | TASK [Run manager part 0]
2026-03-10 00:04:43.268239 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-10 00:04:43.310190 | orchestrator |
2026-03-10 00:04:43.310229 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-10 00:04:43.310236 | orchestrator |
2026-03-10 00:04:43.310248 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-10 00:04:45.241307 | orchestrator | ok: [testbed-manager]
2026-03-10 00:04:45.241379 | orchestrator |
2026-03-10 00:04:45.241439 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-10 00:04:45.241463 | orchestrator |
2026-03-10 00:04:45.241479 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-10 00:04:47.129213 | orchestrator | ok: [testbed-manager]
2026-03-10 00:04:47.129279 | orchestrator |
2026-03-10 00:04:47.129287 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-10 00:04:47.797482 | orchestrator | ok: [testbed-manager]
2026-03-10 00:04:47.797528 | orchestrator |
2026-03-10 00:04:47.797535 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-10 00:04:47.843421 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:47.843455 | orchestrator |
2026-03-10 00:04:47.843464 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-10 00:04:47.868075 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:47.868103 | orchestrator |
2026-03-10 00:04:47.868109 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-10 00:04:47.907098 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:47.907131 | orchestrator |
2026-03-10 00:04:47.907137 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-10 00:04:47.935178 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:47.935203 | orchestrator |
2026-03-10 00:04:47.935208 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-10 00:04:47.967500 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:47.967535 | orchestrator |
2026-03-10 00:04:47.967543 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-10 00:04:48.001906 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:48.001935 | orchestrator |
2026-03-10 00:04:48.001942 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-10 00:04:48.026645 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:04:48.026663 | orchestrator |
2026-03-10 00:04:48.026668 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-10 00:04:48.783403 | orchestrator | changed: [testbed-manager]
2026-03-10 00:04:48.783512 | orchestrator |
2026-03-10 00:04:48.783525 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-10 00:07:49.113595 | orchestrator | changed: [testbed-manager]
2026-03-10 00:07:49.113762 | orchestrator |
2026-03-10 00:07:49.113796 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-10 00:09:14.020852 | orchestrator | changed: [testbed-manager]
2026-03-10 00:09:14.020900 | orchestrator |
2026-03-10 00:09:14.020909 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-10 00:09:39.411595 | orchestrator | changed: [testbed-manager]
2026-03-10 00:09:39.411645 | orchestrator |
2026-03-10 00:09:39.411656 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-10 00:09:49.299558 | orchestrator | changed: [testbed-manager]
2026-03-10 00:09:49.299643 | orchestrator |
2026-03-10 00:09:49.299658 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-10 00:09:49.353036 | orchestrator | ok: [testbed-manager]
2026-03-10 00:09:49.353159 | orchestrator |
2026-03-10 00:09:49.353185 | orchestrator | TASK [Get current user] ********************************************************
2026-03-10 00:09:50.160866 | orchestrator | ok: [testbed-manager]
2026-03-10 00:09:50.160932 | orchestrator |
2026-03-10 00:09:50.160943 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-10 00:09:50.934550 | orchestrator | changed: [testbed-manager]
2026-03-10 00:09:50.934623 | orchestrator |
2026-03-10 00:09:50.934635 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-10 00:09:57.533550 | orchestrator | changed: [testbed-manager]
2026-03-10 00:09:57.533595 | orchestrator |
2026-03-10 00:09:57.533622 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-10 00:10:03.889151 | orchestrator | changed: [testbed-manager]
2026-03-10 00:10:03.889251 | orchestrator |
2026-03-10 00:10:03.889272 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-10 00:10:06.587213 | orchestrator | changed: [testbed-manager]
2026-03-10 00:10:06.587244 |
orchestrator | 2026-03-10 00:10:06.587251 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-10 00:10:08.213412 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:08.213487 | orchestrator | 2026-03-10 00:10:08.213503 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-10 00:10:09.290179 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-10 00:10:09.290218 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-10 00:10:09.290227 | orchestrator | 2026-03-10 00:10:09.290235 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-10 00:10:09.332513 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-10 00:10:09.332580 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-10 00:10:09.332594 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-10 00:10:09.332605 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-10 00:10:12.354408 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-10 00:10:12.354495 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-10 00:10:12.354515 | orchestrator | 2026-03-10 00:10:12.354530 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-10 00:10:12.953630 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:12.953705 | orchestrator | 2026-03-10 00:10:12.953717 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-10 00:11:34.814187 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-10 00:11:34.814241 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-10 00:11:34.814252 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-10 00:11:34.814260 | orchestrator | 2026-03-10 00:11:34.814269 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-10 00:11:37.225924 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-10 00:11:37.225963 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-10 00:11:37.225969 | orchestrator | 2026-03-10 00:11:37.225974 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-10 00:11:37.225979 | orchestrator | 2026-03-10 00:11:37.225984 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:11:38.677962 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:38.678107 | orchestrator | 2026-03-10 00:11:38.678128 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-10 00:11:38.728912 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:38.728992 | 
orchestrator | 2026-03-10 00:11:38.729008 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-10 00:11:38.800255 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:38.800341 | orchestrator | 2026-03-10 00:11:38.800356 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-10 00:11:39.613983 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:39.614069 | orchestrator | 2026-03-10 00:11:39.614080 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-10 00:11:40.389543 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:40.389588 | orchestrator | 2026-03-10 00:11:40.389596 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-10 00:11:41.821834 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-10 00:11:41.821877 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-10 00:11:41.821885 | orchestrator | 2026-03-10 00:11:41.821901 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-10 00:11:43.266892 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:43.267019 | orchestrator | 2026-03-10 00:11:43.267092 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-10 00:11:45.125296 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:11:45.125369 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-10 00:11:45.125379 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:11:45.125386 | orchestrator | 2026-03-10 00:11:45.125396 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-10 00:11:45.321895 | orchestrator | skipping: 
[testbed-manager] 2026-03-10 00:11:45.321959 | orchestrator | 2026-03-10 00:11:45.321970 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-10 00:11:45.321980 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:45.321988 | orchestrator | 2026-03-10 00:11:45.321998 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-10 00:11:45.835942 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:45.836033 | orchestrator | 2026-03-10 00:11:45.836079 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-10 00:11:45.909721 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:45.909774 | orchestrator | 2026-03-10 00:11:45.909780 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-10 00:11:46.775083 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:11:46.775342 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:46.775368 | orchestrator | 2026-03-10 00:11:46.775386 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-10 00:11:46.811843 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:46.811923 | orchestrator | 2026-03-10 00:11:46.811939 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-10 00:11:46.853025 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:46.853120 | orchestrator | 2026-03-10 00:11:46.853132 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-10 00:11:46.889932 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:46.889999 | orchestrator | 2026-03-10 00:11:46.890010 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-10 00:11:46.961009 | 
orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:46.961115 | orchestrator | 2026-03-10 00:11:46.961129 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-10 00:11:47.678453 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:47.678517 | orchestrator | 2026-03-10 00:11:47.678528 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-10 00:11:47.678538 | orchestrator | 2026-03-10 00:11:47.678547 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:11:49.318693 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:49.318780 | orchestrator | 2026-03-10 00:11:49.318797 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-10 00:11:50.397749 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:50.397829 | orchestrator | 2026-03-10 00:11:50.397853 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:11:50.397867 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-10 00:11:50.397879 | orchestrator | 2026-03-10 00:11:50.744943 | orchestrator | ok: Runtime: 0:07:07.789343 2026-03-10 00:11:50.762758 | 2026-03-10 00:11:50.762926 | TASK [Point out that logging in on the manager is now possible] 2026-03-10 00:11:50.793895 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-10 00:11:50.800813 | 2026-03-10 00:11:50.800919 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-10 00:11:50.846985 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-10 00:11:50.856341 | 2026-03-10 00:11:50.856594 | TASK [Run manager part 1 + 2] 2026-03-10 00:11:51.732814 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-10 00:11:51.790153 | orchestrator | 2026-03-10 00:11:51.790242 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-10 00:11:51.790259 | orchestrator | 2026-03-10 00:11:51.790287 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:11:55.046061 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:55.046158 | orchestrator | 2026-03-10 00:11:55.046209 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-10 00:11:55.090577 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:55.090633 | orchestrator | 2026-03-10 00:11:55.090642 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-10 00:11:55.136623 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:55.136680 | orchestrator | 2026-03-10 00:11:55.136688 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-10 00:11:55.181799 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:55.181894 | orchestrator | 2026-03-10 00:11:55.181913 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-10 00:11:55.258198 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:55.258254 | orchestrator | 2026-03-10 00:11:55.258261 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-10 00:11:55.325225 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:55.325314 | orchestrator | 2026-03-10 00:11:55.325331 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-10 00:11:55.381987 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-10 00:11:55.382124 | orchestrator | 2026-03-10 00:11:55.382140 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-10 00:11:56.248533 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:56.248692 | orchestrator | 2026-03-10 00:11:56.248711 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-10 00:11:56.292199 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:11:56.292252 | orchestrator | 2026-03-10 00:11:56.292259 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-10 00:11:57.764696 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:57.764756 | orchestrator | 2026-03-10 00:11:57.764765 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-10 00:11:58.390656 | orchestrator | ok: [testbed-manager] 2026-03-10 00:11:58.390713 | orchestrator | 2026-03-10 00:11:58.390722 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-10 00:11:59.643734 | orchestrator | changed: [testbed-manager] 2026-03-10 00:11:59.643831 | orchestrator | 2026-03-10 00:11:59.643850 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-10 00:12:16.010385 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:16.010602 | orchestrator | 2026-03-10 00:12:16.010633 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-10 00:12:16.716366 | orchestrator | ok: [testbed-manager] 2026-03-10 00:12:16.716460 | orchestrator | 2026-03-10 00:12:16.716479 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-10 00:12:16.772674 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:12:16.772762 | orchestrator | 2026-03-10 00:12:16.772779 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-10 00:12:17.803421 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:17.803506 | orchestrator | 2026-03-10 00:12:17.803522 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-10 00:12:18.847649 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:18.847711 | orchestrator | 2026-03-10 00:12:18.847725 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-10 00:12:19.473239 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:19.473304 | orchestrator | 2026-03-10 00:12:19.473320 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-10 00:12:19.515826 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-10 00:12:19.515889 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-10 00:12:19.515896 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-10 00:12:19.515901 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-10 00:12:21.598670 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:21.598719 | orchestrator | 2026-03-10 00:12:21.598727 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-10 00:12:31.145618 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-10 00:12:31.145687 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-10 00:12:31.145704 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-10 00:12:31.145716 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-10 00:12:31.145735 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-10 00:12:31.145746 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-10 00:12:31.145757 | orchestrator | 2026-03-10 00:12:31.145771 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-10 00:12:32.234325 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:32.234413 | orchestrator | 2026-03-10 00:12:32.234430 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-10 00:12:32.283113 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:12:32.283185 | orchestrator | 2026-03-10 00:12:32.283196 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-10 00:12:35.505366 | orchestrator | changed: [testbed-manager] 2026-03-10 00:12:35.505447 | orchestrator | 2026-03-10 00:12:35.505467 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-10 00:12:35.542681 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:12:35.542772 | orchestrator | 2026-03-10 00:12:35.542789 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-10 00:14:22.305271 | orchestrator | changed: [testbed-manager] 2026-03-10 
00:14:22.305308 | orchestrator | 2026-03-10 00:14:22.305316 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-10 00:14:23.514623 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:23.514696 | orchestrator | 2026-03-10 00:14:23.514713 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:14:23.514725 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-10 00:14:23.514736 | orchestrator | 2026-03-10 00:14:23.986055 | orchestrator | ok: Runtime: 0:02:32.468637 2026-03-10 00:14:24.004277 | 2026-03-10 00:14:24.004423 | TASK [Reboot manager] 2026-03-10 00:14:25.542152 | orchestrator | ok: Runtime: 0:00:00.983545 2026-03-10 00:14:25.558509 | 2026-03-10 00:14:25.558674 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-10 00:14:42.560478 | orchestrator | ok 2026-03-10 00:14:42.568924 | 2026-03-10 00:14:42.569042 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-10 00:15:42.625509 | orchestrator | ok 2026-03-10 00:15:42.636191 | 2026-03-10 00:15:42.636341 | TASK [Deploy manager + bootstrap nodes] 2026-03-10 00:15:45.353883 | orchestrator | 2026-03-10 00:15:45.353993 | orchestrator | # DEPLOY MANAGER 2026-03-10 00:15:45.354004 | orchestrator | 2026-03-10 00:15:45.354053 | orchestrator | + set -e 2026-03-10 00:15:45.354059 | orchestrator | + echo 2026-03-10 00:15:45.354065 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-10 00:15:45.354072 | orchestrator | + echo 2026-03-10 00:15:45.354090 | orchestrator | + cat /opt/manager-vars.sh 2026-03-10 00:15:45.358443 | orchestrator | export NUMBER_OF_NODES=6 2026-03-10 00:15:45.358490 | orchestrator | 2026-03-10 00:15:45.358497 | orchestrator | export CEPH_VERSION=reef 2026-03-10 00:15:45.358505 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-10 00:15:45.358513 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-10 00:15:45.358527 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-10 00:15:45.358531 | orchestrator | 2026-03-10 00:15:45.358542 | orchestrator | export ARA=false 2026-03-10 00:15:45.358549 | orchestrator | export DEPLOY_MODE=manager 2026-03-10 00:15:45.358559 | orchestrator | export TEMPEST=true 2026-03-10 00:15:45.358566 | orchestrator | export IS_ZUUL=true 2026-03-10 00:15:45.358572 | orchestrator | 2026-03-10 00:15:45.358583 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:15:45.358590 | orchestrator | export EXTERNAL_API=false 2026-03-10 00:15:45.358597 | orchestrator | 2026-03-10 00:15:45.358603 | orchestrator | export IMAGE_USER=ubuntu 2026-03-10 00:15:45.358612 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-10 00:15:45.358618 | orchestrator | 2026-03-10 00:15:45.358623 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-10 00:15:45.358872 | orchestrator | 2026-03-10 00:15:45.358885 | orchestrator | + echo 2026-03-10 00:15:45.358891 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-10 00:15:45.360294 | orchestrator | ++ export INTERACTIVE=false 2026-03-10 00:15:45.360370 | orchestrator | ++ INTERACTIVE=false 2026-03-10 00:15:45.360387 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-10 00:15:45.360403 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-10 00:15:45.360969 | orchestrator | + source /opt/manager-vars.sh 2026-03-10 00:15:45.361013 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-10 00:15:45.361033 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-10 00:15:45.361044 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-10 00:15:45.361055 | orchestrator | ++ CEPH_VERSION=reef 2026-03-10 00:15:45.361066 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-10 00:15:45.361078 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-10 00:15:45.361089 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-10 00:15:45.361100 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-10 00:15:45.361110 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-10 00:15:45.361138 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-10 00:15:45.361152 | orchestrator | ++ export ARA=false 2026-03-10 00:15:45.361164 | orchestrator | ++ ARA=false 2026-03-10 00:15:45.361177 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-10 00:15:45.361189 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-10 00:15:45.361200 | orchestrator | ++ export TEMPEST=true 2026-03-10 00:15:45.361212 | orchestrator | ++ TEMPEST=true 2026-03-10 00:15:45.361225 | orchestrator | ++ export IS_ZUUL=true 2026-03-10 00:15:45.361237 | orchestrator | ++ IS_ZUUL=true 2026-03-10 00:15:45.361249 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:15:45.361261 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:15:45.361273 | orchestrator | ++ export EXTERNAL_API=false 2026-03-10 00:15:45.361285 | orchestrator | ++ EXTERNAL_API=false 2026-03-10 00:15:45.361298 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-10 00:15:45.361309 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-10 00:15:45.361322 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-10 00:15:45.361335 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-10 00:15:45.361355 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-10 00:15:45.361367 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-10 00:15:45.361379 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-10 00:15:45.419918 | orchestrator | + docker version 2026-03-10 00:15:45.557586 | orchestrator | Client: Docker Engine - Community 2026-03-10 00:15:45.557710 | orchestrator | Version: 27.5.1 2026-03-10 00:15:45.557732 | orchestrator | API version: 1.47 2026-03-10 00:15:45.557747 | orchestrator | Go version: go1.22.11 2026-03-10 00:15:45.557758 | orchestrator | Git commit: 9f9e405 2026-03-10 00:15:45.557769 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-10 00:15:45.557781 | orchestrator | OS/Arch: linux/amd64 2026-03-10 00:15:45.557792 | orchestrator | Context: default 2026-03-10 00:15:45.557802 | orchestrator | 2026-03-10 00:15:45.557814 | orchestrator | Server: Docker Engine - Community 2026-03-10 00:15:45.557825 | orchestrator | Engine: 2026-03-10 00:15:45.557848 | orchestrator | Version: 27.5.1 2026-03-10 00:15:45.557860 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-10 00:15:45.557903 | orchestrator | Go version: go1.22.11 2026-03-10 00:15:45.557915 | orchestrator | Git commit: 4c9b3b0 2026-03-10 00:15:45.557925 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-10 00:15:45.557967 | orchestrator | OS/Arch: linux/amd64 2026-03-10 00:15:45.557978 | orchestrator | Experimental: false 2026-03-10 00:15:45.557989 | orchestrator | containerd: 2026-03-10 00:15:45.558060 | orchestrator | Version: v2.2.1 2026-03-10 00:15:45.558075 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-10 00:15:45.558091 | orchestrator | runc: 2026-03-10 00:15:45.558223 | orchestrator | Version: 1.3.4 2026-03-10 00:15:45.558240 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-10 00:15:45.558251 | orchestrator | docker-init: 2026-03-10 00:15:45.558262 | orchestrator | Version: 0.19.0 2026-03-10 00:15:45.558273 | orchestrator | GitCommit: de40ad0 2026-03-10 00:15:45.562421 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-10 00:15:45.572721 | orchestrator | + set -e 2026-03-10 00:15:45.572830 | orchestrator | + source /opt/manager-vars.sh 2026-03-10 00:15:45.572838 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-10 00:15:45.572844 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-10 00:15:45.572848 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-10 00:15:45.572852 | orchestrator | ++ CEPH_VERSION=reef 2026-03-10 00:15:45.572856 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-10 
00:15:45.572861 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-10 00:15:45.572865 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-10 00:15:45.572869 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-10 00:15:45.572873 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-10 00:15:45.572877 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-10 00:15:45.572881 | orchestrator | ++ export ARA=false 2026-03-10 00:15:45.572885 | orchestrator | ++ ARA=false 2026-03-10 00:15:45.572889 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-10 00:15:45.572893 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-10 00:15:45.572896 | orchestrator | ++ export TEMPEST=true 2026-03-10 00:15:45.572900 | orchestrator | ++ TEMPEST=true 2026-03-10 00:15:45.572912 | orchestrator | ++ export IS_ZUUL=true 2026-03-10 00:15:45.572915 | orchestrator | ++ IS_ZUUL=true 2026-03-10 00:15:45.572919 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:15:45.572923 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:15:45.572927 | orchestrator | ++ export EXTERNAL_API=false 2026-03-10 00:15:45.572945 | orchestrator | ++ EXTERNAL_API=false 2026-03-10 00:15:45.572949 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-10 00:15:45.572953 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-10 00:15:45.572957 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-10 00:15:45.572960 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-10 00:15:45.572964 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-10 00:15:45.572968 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-10 00:15:45.572972 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-10 00:15:45.572975 | orchestrator | ++ export INTERACTIVE=false 2026-03-10 00:15:45.572979 | orchestrator | ++ INTERACTIVE=false 2026-03-10 00:15:45.572983 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-10 00:15:45.572989 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-10 00:15:45.573043 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-10 00:15:45.573052 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-10 00:15:45.581730 | orchestrator | + set -e 2026-03-10 00:15:45.581789 | orchestrator | + VERSION=9.5.0 2026-03-10 00:15:45.581800 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:15:45.589687 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-10 00:15:45.589780 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:15:45.594090 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:15:45.597675 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-10 00:15:45.606107 | orchestrator | /opt/configuration ~ 2026-03-10 00:15:45.606165 | orchestrator | + set -e 2026-03-10 00:15:45.606188 | orchestrator | + pushd /opt/configuration 2026-03-10 00:15:45.606202 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-10 00:15:45.607530 | orchestrator | + source /opt/venv/bin/activate 2026-03-10 00:15:45.609531 | orchestrator | ++ deactivate nondestructive 2026-03-10 00:15:45.609564 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:45.609574 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:45.609606 | orchestrator | ++ hash -r 2026-03-10 00:15:45.609613 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:45.609654 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-10 00:15:45.609665 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-10 00:15:45.609677 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-10 00:15:45.609689 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-10 00:15:45.609700 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-10 00:15:45.609711 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-10 00:15:45.609722 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-10 00:15:45.609734 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:15:45.609753 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:15:45.609764 | orchestrator | ++ export PATH 2026-03-10 00:15:45.609777 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:45.609788 | orchestrator | ++ '[' -z '' ']' 2026-03-10 00:15:45.609799 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-10 00:15:45.609809 | orchestrator | ++ PS1='(venv) ' 2026-03-10 00:15:45.609821 | orchestrator | ++ export PS1 2026-03-10 00:15:45.609832 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-10 00:15:45.609844 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-10 00:15:45.609856 | orchestrator | ++ hash -r 2026-03-10 00:15:45.609866 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-10 00:15:46.893925 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-10 00:15:46.894780 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-10 00:15:46.896257 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-10 00:15:46.897797 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-10 00:15:46.898810 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-10 00:15:46.909214 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-10 00:15:46.910679 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-10 00:15:46.911540 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-10 00:15:46.912798 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-10 00:15:46.944446 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.5) 2026-03-10 00:15:46.945793 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-10 00:15:46.947468 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-10 00:15:46.948800 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-10 00:15:46.952699 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-10 00:15:47.166760 | orchestrator | ++ which gilt 2026-03-10 00:15:47.172342 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-10 00:15:47.172417 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-10 00:15:47.450859 | orchestrator | osism.cfg-generics: 2026-03-10 00:15:47.603572 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-10 00:15:47.603678 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-10 00:15:47.603814 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-10 00:15:47.603839 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-10 00:15:48.232985 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-10 00:15:48.246859 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-10 00:15:48.600364 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-10 00:15:48.650199 | orchestrator | ~ 2026-03-10 00:15:48.650320 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-10 00:15:48.650336 | orchestrator | + deactivate 2026-03-10 00:15:48.650349 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-10 00:15:48.650361 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:15:48.650372 | orchestrator | + export PATH 2026-03-10 00:15:48.650383 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-10 00:15:48.650394 | orchestrator | + '[' -n '' ']' 2026-03-10 00:15:48.650407 | orchestrator | + hash -r 2026-03-10 00:15:48.650418 | orchestrator | + '[' -n '' ']' 2026-03-10 00:15:48.650429 | orchestrator | + unset VIRTUAL_ENV 2026-03-10 00:15:48.650441 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-10 00:15:48.650460 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-10 00:15:48.650478 | orchestrator | + unset -f deactivate 2026-03-10 00:15:48.650497 | orchestrator | + popd 2026-03-10 00:15:48.652074 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-10 00:15:48.652129 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-10 00:15:48.652769 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-10 00:15:48.701780 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-10 00:15:48.701883 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-10 00:15:48.702121 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-10 00:15:48.749810 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:15:48.750656 | orchestrator | ++ semver 2024.2 2025.1 2026-03-10 00:15:48.806534 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:15:48.806633 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-10 00:15:48.901601 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-10 00:15:48.901687 | orchestrator | + source /opt/venv/bin/activate 2026-03-10 00:15:48.901700 | orchestrator | ++ deactivate nondestructive 2026-03-10 00:15:48.902103 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:48.902122 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:48.902131 | orchestrator | ++ hash -r 2026-03-10 00:15:48.902140 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:48.902149 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-10 00:15:48.902157 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-10 00:15:48.902177 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-10 00:15:48.902245 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-10 00:15:48.902264 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-10 00:15:48.902281 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-10 00:15:48.902301 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-10 00:15:48.902417 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:15:48.902462 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:15:48.902518 | orchestrator | ++ export PATH 2026-03-10 00:15:48.902714 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:15:48.902729 | orchestrator | ++ '[' -z '' ']' 2026-03-10 00:15:48.902890 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-10 00:15:48.902901 | orchestrator | ++ PS1='(venv) ' 2026-03-10 00:15:48.902910 | orchestrator | ++ export PS1 2026-03-10 00:15:48.902919 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-10 00:15:48.902963 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-10 00:15:48.902974 | orchestrator | ++ hash -r 2026-03-10 00:15:48.903031 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-10 00:15:51.534562 | orchestrator | 2026-03-10 00:15:51.534658 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-10 00:15:51.534668 | orchestrator | 2026-03-10 00:15:51.534673 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-10 00:15:52.138811 | orchestrator | ok: [testbed-manager] 2026-03-10 00:15:52.138908 | orchestrator | 2026-03-10 00:15:52.138924 | orchestrator | TASK [Copy fact files] ********************************************************* 
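The `set-manager-version.sh` run earlier in the trace pins `manager_version` with `sed` and, for non-`latest` versions, drops the `ceph_version`/`openstack_version` lines so the generics repository can set them. A minimal sketch of that edit, against a stand-in file (the real config lives at `/opt/configuration/environments/manager/configuration.yml`):

```shell
# stand-in configuration file for illustration
CONFIG=/tmp/configuration.yml
cat > "$CONFIG" <<'EOF'
manager_version: latest
ceph_version: quincy
openstack_version: 2024.1
EOF

VERSION=9.5.0
# pin the manager version in place, as the trace's sed call does
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONFIG"
# for a pinned (non-latest) manager, remove the floating version pins
if [ "$VERSION" != latest ]; then
  sed -i '/ceph_version:/d' "$CONFIG"
  sed -i '/openstack_version:/d' "$CONFIG"
fi
cat "$CONFIG"
```

After this, `gilt overlay` and the `set-versions` helper (seen in the trace) can fill in the Ceph and OpenStack versions matching the pinned manager release.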
2026-03-10 00:15:53.124489 | orchestrator | changed: [testbed-manager] 2026-03-10 00:15:53.124593 | orchestrator | 2026-03-10 00:15:53.124611 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-10 00:15:53.124659 | orchestrator | 2026-03-10 00:15:53.124672 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:15:55.504844 | orchestrator | ok: [testbed-manager] 2026-03-10 00:15:55.504979 | orchestrator | 2026-03-10 00:15:55.504996 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-10 00:15:55.556051 | orchestrator | ok: [testbed-manager] 2026-03-10 00:15:55.556145 | orchestrator | 2026-03-10 00:15:55.556161 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-10 00:15:56.015285 | orchestrator | changed: [testbed-manager] 2026-03-10 00:15:56.015411 | orchestrator | 2026-03-10 00:15:56.015443 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-10 00:15:56.061671 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:15:56.061789 | orchestrator | 2026-03-10 00:15:56.061806 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-10 00:15:56.411724 | orchestrator | changed: [testbed-manager] 2026-03-10 00:15:56.411843 | orchestrator | 2026-03-10 00:15:56.411873 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-10 00:15:56.760370 | orchestrator | ok: [testbed-manager] 2026-03-10 00:15:56.760449 | orchestrator | 2026-03-10 00:15:56.760460 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-10 00:15:56.882378 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:15:56.882468 | orchestrator | 2026-03-10 00:15:56.882483 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-10 00:15:56.882496 | orchestrator | 2026-03-10 00:15:56.882508 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:15:58.657217 | orchestrator | ok: [testbed-manager] 2026-03-10 00:15:58.657314 | orchestrator | 2026-03-10 00:15:58.657330 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-10 00:15:58.769708 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-10 00:15:58.769794 | orchestrator | 2026-03-10 00:15:58.769808 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-10 00:15:58.826217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-10 00:15:58.826338 | orchestrator | 2026-03-10 00:15:58.826365 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-10 00:16:01.730375 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-10 00:16:01.730483 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-10 00:16:01.730496 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-10 00:16:01.730503 | orchestrator | 2026-03-10 00:16:01.730514 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-10 00:16:03.579076 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-10 00:16:03.579180 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-10 00:16:03.579195 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-10 00:16:03.579207 | orchestrator | 2026-03-10 00:16:03.579220 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-10 00:16:04.253444 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:16:04.253561 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:04.253578 | orchestrator | 2026-03-10 00:16:04.253591 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-10 00:16:04.917444 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:16:04.917535 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:04.917546 | orchestrator | 2026-03-10 00:16:04.917555 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-10 00:16:04.964121 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:16:04.964172 | orchestrator | 2026-03-10 00:16:04.964200 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-10 00:16:05.393032 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:05.393179 | orchestrator | 2026-03-10 00:16:05.393207 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-10 00:16:05.471958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-10 00:16:05.472053 | orchestrator | 2026-03-10 00:16:05.472069 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-10 00:16:06.576434 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:06.576497 | orchestrator | 2026-03-10 00:16:06.576511 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-10 00:16:08.394782 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:08.394877 | orchestrator | 2026-03-10 00:16:08.394891 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-10 00:16:24.485801 | 
orchestrator | changed: [testbed-manager] 2026-03-10 00:16:24.485948 | orchestrator | 2026-03-10 00:16:24.486072 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-10 00:16:24.539833 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:16:24.539963 | orchestrator | 2026-03-10 00:16:24.540017 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-10 00:16:24.540040 | orchestrator | 2026-03-10 00:16:24.540059 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:16:26.283447 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:26.283538 | orchestrator | 2026-03-10 00:16:26.283554 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-10 00:16:26.403877 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-10 00:16:26.404037 | orchestrator | 2026-03-10 00:16:26.404054 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-10 00:16:26.463914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:16:26.464023 | orchestrator | 2026-03-10 00:16:26.464036 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-10 00:16:29.172905 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:29.173084 | orchestrator | 2026-03-10 00:16:29.173104 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-10 00:16:29.230825 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:29.230944 | orchestrator | 2026-03-10 00:16:29.230963 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-10 00:16:29.373174 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-10 00:16:29.373283 | orchestrator | 2026-03-10 00:16:29.373308 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-10 00:16:32.260442 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-10 00:16:32.260518 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-10 00:16:32.260529 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-10 00:16:32.260538 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-10 00:16:32.260547 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-10 00:16:32.260555 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-10 00:16:32.260564 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-10 00:16:32.260573 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-10 00:16:32.260581 | orchestrator | 2026-03-10 00:16:32.260590 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-10 00:16:32.961876 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:32.961982 | orchestrator | 2026-03-10 00:16:32.961995 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-10 00:16:33.607996 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:33.608113 | orchestrator | 2026-03-10 00:16:33.608140 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-10 00:16:33.676276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-10 00:16:33.676379 | orchestrator | 2026-03-10 00:16:33.676395 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-10 00:16:34.906080 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-10 00:16:34.906190 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-10 00:16:34.906206 | orchestrator | 2026-03-10 00:16:34.907115 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-10 00:16:35.528354 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:35.528467 | orchestrator | 2026-03-10 00:16:35.528489 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-10 00:16:35.584063 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:16:35.584155 | orchestrator | 2026-03-10 00:16:35.584169 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-10 00:16:35.659156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-10 00:16:35.659252 | orchestrator | 2026-03-10 00:16:35.659267 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-10 00:16:36.324873 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:36.325021 | orchestrator | 2026-03-10 00:16:36.325048 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-10 00:16:36.392500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-10 00:16:36.392563 | orchestrator | 2026-03-10 00:16:36.392569 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-10 00:16:37.780876 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:16:37.781003 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-10 00:16:37.781018 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:37.781031 | orchestrator | 2026-03-10 00:16:37.781042 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-10 00:16:38.410009 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:38.410157 | orchestrator | 2026-03-10 00:16:38.410175 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-10 00:16:38.466315 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:16:38.466381 | orchestrator | 2026-03-10 00:16:38.466387 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-10 00:16:38.560070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-10 00:16:38.560170 | orchestrator | 2026-03-10 00:16:38.560187 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-10 00:16:39.126151 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:39.126248 | orchestrator | 2026-03-10 00:16:39.126265 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-10 00:16:39.545686 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:39.545778 | orchestrator | 2026-03-10 00:16:39.545794 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-10 00:16:40.828331 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-10 00:16:40.828432 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-10 00:16:40.828447 | orchestrator | 2026-03-10 00:16:40.828460 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-10 00:16:41.507260 | orchestrator | changed: [testbed-manager] 2026-03-10 
00:16:41.507351 | orchestrator | 2026-03-10 00:16:41.507366 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-10 00:16:41.906899 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:41.907024 | orchestrator | 2026-03-10 00:16:41.907041 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-10 00:16:42.281578 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:42.281637 | orchestrator | 2026-03-10 00:16:42.281644 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-10 00:16:42.335076 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:16:42.335229 | orchestrator | 2026-03-10 00:16:42.335257 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-10 00:16:42.418450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-10 00:16:42.418551 | orchestrator | 2026-03-10 00:16:42.418561 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-10 00:16:42.461644 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:42.461704 | orchestrator | 2026-03-10 00:16:42.461709 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-10 00:16:44.556528 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-10 00:16:44.556630 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-10 00:16:44.556644 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-10 00:16:44.556653 | orchestrator | 2026-03-10 00:16:44.556662 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-10 00:16:45.319764 | orchestrator | changed: [testbed-manager] 2026-03-10 
00:16:45.319857 | orchestrator | 2026-03-10 00:16:45.319873 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-10 00:16:46.060420 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:46.060536 | orchestrator | 2026-03-10 00:16:46.060567 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-10 00:16:46.844026 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:46.844124 | orchestrator | 2026-03-10 00:16:46.844140 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-10 00:16:46.906589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-10 00:16:46.906677 | orchestrator | 2026-03-10 00:16:46.906695 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-10 00:16:46.948670 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:46.948755 | orchestrator | 2026-03-10 00:16:46.948771 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-10 00:16:47.664687 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-10 00:16:47.664793 | orchestrator | 2026-03-10 00:16:47.664814 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-10 00:16:47.739718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-10 00:16:47.739787 | orchestrator | 2026-03-10 00:16:47.739796 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-10 00:16:48.470095 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:48.470193 | orchestrator | 2026-03-10 00:16:48.470209 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-10 00:16:49.073952 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:49.074094 | orchestrator | 2026-03-10 00:16:49.074113 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-10 00:16:49.136648 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:16:49.136741 | orchestrator | 2026-03-10 00:16:49.136757 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-10 00:16:49.195968 | orchestrator | ok: [testbed-manager] 2026-03-10 00:16:49.196061 | orchestrator | 2026-03-10 00:16:49.196076 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-10 00:16:50.045562 | orchestrator | changed: [testbed-manager] 2026-03-10 00:16:50.045665 | orchestrator | 2026-03-10 00:16:50.045682 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-10 00:18:10.106446 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:10.106569 | orchestrator | 2026-03-10 00:18:10.106587 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-10 00:18:11.112522 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:11.112630 | orchestrator | 2026-03-10 00:18:11.112647 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-10 00:18:11.174796 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:11.174888 | orchestrator | 2026-03-10 00:18:11.174948 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-10 00:18:13.612618 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:13.612728 | orchestrator | 2026-03-10 00:18:13.612746 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
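Earlier in the trace, a `semver` helper compares two versions and prints -1/0/1, which the script then gates feature flags on (e.g. emitting `enable_osism_kubernetes: true` when the manager version is at least 7.0.0). The real helper isn't shown in the log; a hedged emulation using `sort -V` (which does not handle pre-release suffixes like `10.0.0-0` exactly as semver does):

```shell
# approximate semver compare: prints -1 if $1 < $2, 0 if equal, 1 if $1 > $2
semver() {
  if [ "$1" = "$2" ]; then echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
  else echo 1
  fi
}

# gate a config flag on the comparison, as the trace does
if [ "$(semver 9.5.0 7.0.0)" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```

This mirrors the trace's `[[ 1 -ge 0 ]]` / `[[ -1 -ge 0 ]]` checks: 9.5.0 vs 7.0.0 yields 1 (flag emitted), while 9.5.0 vs 10.0.0-0 and 2024.2 vs 2025.1 yield -1 (those branches skipped).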
2026-03-10 00:18:13.672960 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:13.673070 | orchestrator | 2026-03-10 00:18:13.673097 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-10 00:18:13.673119 | orchestrator | 2026-03-10 00:18:13.673141 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-10 00:18:13.794270 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:13.794357 | orchestrator | 2026-03-10 00:18:13.794372 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-10 00:19:13.846502 | orchestrator | Pausing for 60 seconds 2026-03-10 00:19:13.846622 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:13.846637 | orchestrator | 2026-03-10 00:19:13.846649 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-10 00:19:16.994320 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:16.994442 | orchestrator | 2026-03-10 00:19:16.994467 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-10 00:20:18.959414 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-10 00:20:18.959526 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-10 00:20:18.959574 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
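The "Wait for an healthy manager service" handler above retries its health probe up to 50 times, logging each failed attempt before eventually reporting `changed`. A minimal sketch of that retry shape, with a stand-in probe instead of the real Docker healthcheck:

```shell
# stand-in probe: pretend the service turns healthy on the 3rd check
attempt=0
healthy_after=3
check_health() { [ "$attempt" -ge "$healthy_after" ]; }

# poll until healthy, giving up after a fixed number of retries
retries=50
until check_health; do
  attempt=$((attempt + 1))
  retries=$((retries - 1))
  [ "$retries" -gt 0 ] || { echo "service never became healthy" >&2; exit 1; }
done
echo "healthy after ${attempt} probes"
```

In the log the first probes fail (50, 49, 48 retries left) before the service container reports healthy, which is the same bounded-retry behaviour as this loop.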
2026-03-10 00:20:18.959593 | orchestrator | changed: [testbed-manager] 2026-03-10 00:20:18.959612 | orchestrator | 2026-03-10 00:20:18.959629 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-10 00:20:30.115413 | orchestrator | changed: [testbed-manager] 2026-03-10 00:20:30.115522 | orchestrator | 2026-03-10 00:20:30.115535 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-10 00:20:30.213294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-10 00:20:30.213374 | orchestrator | 2026-03-10 00:20:30.213383 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-10 00:20:30.213390 | orchestrator | 2026-03-10 00:20:30.213396 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-10 00:20:30.273047 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:20:30.273151 | orchestrator | 2026-03-10 00:20:30.273171 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-10 00:20:30.339544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-10 00:20:30.339630 | orchestrator | 2026-03-10 00:20:30.339641 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-10 00:20:31.139655 | orchestrator | changed: [testbed-manager] 2026-03-10 00:20:31.139767 | orchestrator | 2026-03-10 00:20:31.139786 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-10 00:20:34.503684 | orchestrator | ok: [testbed-manager] 2026-03-10 00:20:34.503790 | orchestrator | 2026-03-10 00:20:34.503805 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-10 00:20:34.596402 | orchestrator | ok: [testbed-manager] => { 2026-03-10 00:20:34.596509 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-10 00:20:34.596524 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-10 00:20:34.596536 | orchestrator | "Checking running containers against expected versions...", 2026-03-10 00:20:34.596548 | orchestrator | "", 2026-03-10 00:20:34.596561 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-10 00:20:34.596572 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-10 00:20:34.596584 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.596595 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-10 00:20:34.596606 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.596617 | orchestrator | "", 2026-03-10 00:20:34.596628 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-10 00:20:34.596639 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-10 00:20:34.596676 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.596688 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-10 00:20:34.596698 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.596709 | orchestrator | "", 2026-03-10 00:20:34.596720 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-10 00:20:34.596730 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-10 00:20:34.596741 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.596751 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-10 00:20:34.596762 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.596773 | orchestrator | 
"", 2026-03-10 00:20:34.596783 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-10 00:20:34.596794 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-10 00:20:34.596805 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.596815 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-10 00:20:34.596826 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.596836 | orchestrator | "", 2026-03-10 00:20:34.596849 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-10 00:20:34.596860 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-10 00:20:34.596871 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.596881 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-10 00:20:34.596919 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.596930 | orchestrator | "", 2026-03-10 00:20:34.596944 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-10 00:20:34.596955 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.596968 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.596979 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.596991 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597004 | orchestrator | "", 2026-03-10 00:20:34.597015 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-10 00:20:34.597027 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-10 00:20:34.597040 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597053 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-10 00:20:34.597065 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597077 | orchestrator | "", 2026-03-10 00:20:34.597089 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-10 00:20:34.597101 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-10 00:20:34.597114 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597126 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-10 00:20:34.597137 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597150 | orchestrator | "", 2026-03-10 00:20:34.597162 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-10 00:20:34.597173 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-10 00:20:34.597185 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597197 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-10 00:20:34.597209 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597221 | orchestrator | "", 2026-03-10 00:20:34.597234 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-10 00:20:34.597245 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-10 00:20:34.597257 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597268 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-10 00:20:34.597281 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597293 | orchestrator | "", 2026-03-10 00:20:34.597303 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-10 00:20:34.597314 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597333 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597355 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597366 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597377 | orchestrator | "", 2026-03-10 00:20:34.597387 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-10 00:20:34.597398 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597409 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597419 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597430 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597442 | orchestrator | "", 2026-03-10 00:20:34.597452 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-10 00:20:34.597463 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597473 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597484 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597495 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597505 | orchestrator | "", 2026-03-10 00:20:34.597516 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-10 00:20:34.597526 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597537 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597548 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597574 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597585 | orchestrator | "", 2026-03-10 00:20:34.597596 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-10 00:20:34.597607 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597627 | orchestrator | " Enabled: true", 2026-03-10 00:20:34.597638 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-10 00:20:34.597649 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:20:34.597660 | orchestrator | "", 2026-03-10 00:20:34.597670 | orchestrator | "=== Summary ===", 2026-03-10 00:20:34.597681 | orchestrator | "Errors (version mismatches): 0", 2026-03-10 00:20:34.597692 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-10 00:20:34.597703 | orchestrator | "", 2026-03-10 00:20:34.597714 | orchestrator | "✅ All running containers match expected versions!" 2026-03-10 00:20:34.597725 | orchestrator | ] 2026-03-10 00:20:34.597736 | orchestrator | } 2026-03-10 00:20:34.597747 | orchestrator | 2026-03-10 00:20:34.597758 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-10 00:20:34.653987 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:20:34.654114 | orchestrator | 2026-03-10 00:20:34.654127 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:20:34.654139 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-10 00:20:34.654147 | orchestrator | 2026-03-10 00:20:34.768682 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-10 00:20:34.768780 | orchestrator | + deactivate 2026-03-10 00:20:34.768795 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-10 00:20:34.768807 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:20:34.768817 | orchestrator | + export PATH 2026-03-10 00:20:34.768826 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-10 00:20:34.768836 | orchestrator | + '[' -n '' ']' 2026-03-10 00:20:34.768846 | orchestrator | + hash -r 2026-03-10 00:20:34.768964 | orchestrator | + '[' -n '' ']' 2026-03-10 00:20:34.768979 | orchestrator | + unset VIRTUAL_ENV 2026-03-10 00:20:34.768988 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-10 00:20:34.768998 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-10 00:20:34.769007 | orchestrator | + unset -f deactivate 2026-03-10 00:20:34.769019 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-10 00:20:34.780933 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-10 00:20:34.780979 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-10 00:20:34.781017 | orchestrator | + local max_attempts=60 2026-03-10 00:20:34.781028 | orchestrator | + local name=ceph-ansible 2026-03-10 00:20:34.781039 | orchestrator | + local attempt_num=1 2026-03-10 00:20:34.782128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:20:34.821928 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:20:34.822010 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-10 00:20:34.822075 | orchestrator | + local max_attempts=60 2026-03-10 00:20:34.822087 | orchestrator | + local name=kolla-ansible 2026-03-10 00:20:34.822098 | orchestrator | + local attempt_num=1 2026-03-10 00:20:34.822713 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-10 00:20:34.858310 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:20:34.858399 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-10 00:20:34.858412 | orchestrator | + local max_attempts=60 2026-03-10 00:20:34.858423 | orchestrator | + local name=osism-ansible 2026-03-10 00:20:34.858434 | orchestrator | + local attempt_num=1 2026-03-10 00:20:34.859679 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-10 00:20:34.900333 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:20:34.900429 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-10 00:20:34.900444 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-10 00:20:35.640857 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-10 00:20:35.837208 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-10 00:20:35.837313 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-10 00:20:35.837330 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-10 00:20:35.837342 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-10 00:20:35.837356 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-10 00:20:35.837389 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:20:35.837401 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:20:35.837412 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-10 00:20:35.837423 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:20:35.837434 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-10 00:20:35.837445 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-10 00:20:35.837456 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-10 00:20:35.837467 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-10 00:20:35.837502 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-10 00:20:35.837514 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-10 00:20:35.837526 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:20:35.842611 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-10 00:20:35.884886 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-10 00:20:35.885033 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-10 00:20:35.887706 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-10 00:20:48.308828 | orchestrator | 2026-03-10 00:20:48 | INFO  | Task 6f7bee79-e774-4f92-b75b-e6ab1b6e708c (resolvconf) was prepared for execution. 2026-03-10 00:20:48.308965 | orchestrator | 2026-03-10 00:20:48 | INFO  | It takes a moment until task 6f7bee79-e774-4f92-b75b-e6ab1b6e708c (resolvconf) has been started and output is visible here. 
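The shell trace above shows the deployment script calling a `wait_for_container_healthy` function that polls `docker inspect -f '{{.State.Health.Status}}'` for each manager container. A minimal reconstruction from that trace is sketched below; the real helper lives in the testbed's configuration scripts and may differ (the sleep interval and the overridable `DOCKER` variable are assumptions added here so the loop can be exercised without a Docker daemon).

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy as seen in the trace:
# poll the container's health status until it reports "healthy" or the
# attempt budget is exhausted. DOCKER is overridable for testing.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # poll interval is an assumption, not taken from the log
    done
}
```

In the log this is invoked as `wait_for_container_healthy 60 ceph-ansible` (and likewise for `kolla-ansible` and `osism-ansible`), each returning immediately because the containers were already healthy.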
2026-03-10 00:21:02.167331 | orchestrator | 2026-03-10 00:21:02.167478 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-10 00:21:02.167506 | orchestrator | 2026-03-10 00:21:02.167527 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:21:02.167548 | orchestrator | Tuesday 10 March 2026 00:20:52 +0000 (0:00:00.129) 0:00:00.129 ********* 2026-03-10 00:21:02.167569 | orchestrator | ok: [testbed-manager] 2026-03-10 00:21:02.167589 | orchestrator | 2026-03-10 00:21:02.167607 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-10 00:21:02.167627 | orchestrator | Tuesday 10 March 2026 00:20:56 +0000 (0:00:03.528) 0:00:03.658 ********* 2026-03-10 00:21:02.167646 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:21:02.167667 | orchestrator | 2026-03-10 00:21:02.167686 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-10 00:21:02.167706 | orchestrator | Tuesday 10 March 2026 00:20:56 +0000 (0:00:00.050) 0:00:03.709 ********* 2026-03-10 00:21:02.167725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-10 00:21:02.167746 | orchestrator | 2026-03-10 00:21:02.167767 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-10 00:21:02.167785 | orchestrator | Tuesday 10 March 2026 00:20:56 +0000 (0:00:00.082) 0:00:03.791 ********* 2026-03-10 00:21:02.167826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:21:02.167850 | orchestrator | 2026-03-10 00:21:02.167870 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-10 00:21:02.167893 | orchestrator | Tuesday 10 March 2026 00:20:56 +0000 (0:00:00.059) 0:00:03.851 ********* 2026-03-10 00:21:02.167947 | orchestrator | ok: [testbed-manager] 2026-03-10 00:21:02.167966 | orchestrator | 2026-03-10 00:21:02.167986 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-10 00:21:02.168005 | orchestrator | Tuesday 10 March 2026 00:20:57 +0000 (0:00:00.934) 0:00:04.786 ********* 2026-03-10 00:21:02.168025 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:21:02.168046 | orchestrator | 2026-03-10 00:21:02.168077 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-10 00:21:02.168100 | orchestrator | Tuesday 10 March 2026 00:20:57 +0000 (0:00:00.067) 0:00:04.854 ********* 2026-03-10 00:21:02.168154 | orchestrator | ok: [testbed-manager] 2026-03-10 00:21:02.168177 | orchestrator | 2026-03-10 00:21:02.168200 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-10 00:21:02.168220 | orchestrator | Tuesday 10 March 2026 00:20:57 +0000 (0:00:00.505) 0:00:05.360 ********* 2026-03-10 00:21:02.168241 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:21:02.168260 | orchestrator | 2026-03-10 00:21:02.168280 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-10 00:21:02.168299 | orchestrator | Tuesday 10 March 2026 00:20:57 +0000 (0:00:00.088) 0:00:05.448 ********* 2026-03-10 00:21:02.168341 | orchestrator | changed: [testbed-manager] 2026-03-10 00:21:02.168375 | orchestrator | 2026-03-10 00:21:02.168395 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-10 00:21:02.168414 | orchestrator | Tuesday 10 March 2026 00:20:58 +0000 (0:00:00.578) 0:00:06.027 ********* 2026-03-10 00:21:02.168434 | orchestrator | changed: 
[testbed-manager] 2026-03-10 00:21:02.168519 | orchestrator | 2026-03-10 00:21:02.168541 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-10 00:21:02.168559 | orchestrator | Tuesday 10 March 2026 00:20:59 +0000 (0:00:01.127) 0:00:07.154 ********* 2026-03-10 00:21:02.168580 | orchestrator | ok: [testbed-manager] 2026-03-10 00:21:02.168600 | orchestrator | 2026-03-10 00:21:02.168619 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-10 00:21:02.168637 | orchestrator | Tuesday 10 March 2026 00:21:00 +0000 (0:00:00.996) 0:00:08.151 ********* 2026-03-10 00:21:02.168655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-10 00:21:02.168675 | orchestrator | 2026-03-10 00:21:02.168694 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-10 00:21:02.168713 | orchestrator | Tuesday 10 March 2026 00:21:00 +0000 (0:00:00.097) 0:00:08.249 ********* 2026-03-10 00:21:02.168733 | orchestrator | changed: [testbed-manager] 2026-03-10 00:21:02.168752 | orchestrator | 2026-03-10 00:21:02.168771 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:21:02.168792 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:21:02.168812 | orchestrator | 2026-03-10 00:21:02.168832 | orchestrator | 2026-03-10 00:21:02.168851 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:21:02.168872 | orchestrator | Tuesday 10 March 2026 00:21:01 +0000 (0:00:01.299) 0:00:09.549 ********* 2026-03-10 00:21:02.168892 | orchestrator | =============================================================================== 2026-03-10 00:21:02.168963 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.53s 2026-03-10 00:21:02.168980 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.30s 2026-03-10 00:21:02.168997 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2026-03-10 00:21:02.169014 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-03-10 00:21:02.169031 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.93s 2026-03-10 00:21:02.169048 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-03-10 00:21:02.169093 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-03-10 00:21:02.169111 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-03-10 00:21:02.169127 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-10 00:21:02.169145 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-10 00:21:02.169162 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-10 00:21:02.169180 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-03-10 00:21:02.169211 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-03-10 00:21:02.513329 | orchestrator | + osism apply sshconfig 2026-03-10 00:21:14.674651 | orchestrator | 2026-03-10 00:21:14 | INFO  | Task 3db96175-2b4a-496c-bd2b-f6025a22a81b (sshconfig) was prepared for execution. 
2026-03-10 00:21:14.674770 | orchestrator | 2026-03-10 00:21:14 | INFO  | It takes a moment until task 3db96175-2b4a-496c-bd2b-f6025a22a81b (sshconfig) has been started and output is visible here. 2026-03-10 00:21:26.974631 | orchestrator | 2026-03-10 00:21:26.974752 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-10 00:21:26.974771 | orchestrator | 2026-03-10 00:21:26.974806 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-10 00:21:26.974819 | orchestrator | Tuesday 10 March 2026 00:21:19 +0000 (0:00:00.159) 0:00:00.159 ********* 2026-03-10 00:21:26.974830 | orchestrator | ok: [testbed-manager] 2026-03-10 00:21:26.974842 | orchestrator | 2026-03-10 00:21:26.974853 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-10 00:21:26.974864 | orchestrator | Tuesday 10 March 2026 00:21:19 +0000 (0:00:00.541) 0:00:00.701 ********* 2026-03-10 00:21:26.974875 | orchestrator | changed: [testbed-manager] 2026-03-10 00:21:26.974887 | orchestrator | 2026-03-10 00:21:26.974956 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-10 00:21:26.974979 | orchestrator | Tuesday 10 March 2026 00:21:20 +0000 (0:00:00.543) 0:00:01.244 ********* 2026-03-10 00:21:26.974998 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-10 00:21:26.975011 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-10 00:21:26.975022 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-10 00:21:26.975033 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-10 00:21:26.975043 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-10 00:21:26.975054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-10 00:21:26.975065 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-10 00:21:26.975075 | orchestrator | 2026-03-10 00:21:26.975086 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-10 00:21:26.975097 | orchestrator | Tuesday 10 March 2026 00:21:26 +0000 (0:00:05.932) 0:00:07.176 ********* 2026-03-10 00:21:26.975107 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:21:26.975118 | orchestrator | 2026-03-10 00:21:26.975128 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-10 00:21:26.975139 | orchestrator | Tuesday 10 March 2026 00:21:26 +0000 (0:00:00.065) 0:00:07.242 ********* 2026-03-10 00:21:26.975149 | orchestrator | changed: [testbed-manager] 2026-03-10 00:21:26.975160 | orchestrator | 2026-03-10 00:21:26.975186 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:21:26.975200 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:21:26.975213 | orchestrator | 2026-03-10 00:21:26.975225 | orchestrator | 2026-03-10 00:21:26.975237 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:21:26.975249 | orchestrator | Tuesday 10 March 2026 00:21:26 +0000 (0:00:00.604) 0:00:07.847 ********* 2026-03-10 00:21:26.975261 | orchestrator | =============================================================================== 2026-03-10 00:21:26.975273 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.93s 2026-03-10 00:21:26.975286 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2026-03-10 00:21:26.975298 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2026-03-10 00:21:26.975310 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.54s 2026-03-10 00:21:26.975323 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-10 00:21:27.288195 | orchestrator | + osism apply known-hosts 2026-03-10 00:21:39.439796 | orchestrator | 2026-03-10 00:21:39 | INFO  | Task 5295f529-edc1-4477-98bc-0cfc317859f9 (known-hosts) was prepared for execution. 2026-03-10 00:21:39.439978 | orchestrator | 2026-03-10 00:21:39 | INFO  | It takes a moment until task 5295f529-edc1-4477-98bc-0cfc317859f9 (known-hosts) has been started and output is visible here. 2026-03-10 00:21:56.769243 | orchestrator | 2026-03-10 00:21:56.769367 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-10 00:21:56.769385 | orchestrator | 2026-03-10 00:21:56.769397 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-10 00:21:56.769409 | orchestrator | Tuesday 10 March 2026 00:21:43 +0000 (0:00:00.164) 0:00:00.164 ********* 2026-03-10 00:21:56.769421 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-10 00:21:56.769432 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-10 00:21:56.769443 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-10 00:21:56.769454 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-10 00:21:56.769465 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-10 00:21:56.769476 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-10 00:21:56.769486 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-10 00:21:56.769497 | orchestrator | 2026-03-10 00:21:56.769508 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-10 00:21:56.769520 | orchestrator | Tuesday 10 March 2026 00:21:49 +0000 (0:00:06.074) 0:00:06.239 ********* 2026-03-10 00:21:56.769531 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-10 00:21:56.769545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-10 00:21:56.769556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-10 00:21:56.769567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-10 00:21:56.769578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-10 00:21:56.769600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-10 00:21:56.769611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-10 00:21:56.769622 | orchestrator | 2026-03-10 00:21:56.769633 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:21:56.769644 | orchestrator | Tuesday 10 March 2026 00:21:49 +0000 (0:00:00.152) 0:00:06.391 ********* 2026-03-10 00:21:56.769654 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIItt6pIbDG6F8pMhC9xV9Q8DONv1ZWCTqAY0WTa6Alz3) 2026-03-10 00:21:56.769677 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC76//Y2ZluUHfbtANb69e08ziwZIIA1CHrmPR3KEtOu0o4M5yu47cCMtD+WEZgJQda8t/Z4gbEFpmJnK6SZ/AErB7P0cYVAdzSUDaD11qT5yCKRLcj9DTV9CuR+vT/e3bczPmwf0e70e1NFr1XrOrWlM20qWwNbRVXTce3197Wib9pidrVt0TPykUS84aEEV9jbYYNGdkVU4JR0FrQVUFuw9aJNKbnAXq7hxKS05b3DMe/1mppoU+UFOOi8nyT2pmRkUoKyfFnWpkbd2ZYqjqVsNm5QXKJbBcWwj4cPqQmwdWgLKS2rNtsiPTWL0e0G8EyC5RVdBZ5e3+qu0kbe2PL7Gn4mojXr2syZvTW0fNwx2mwHjUFqtIo+n+PhKpk3YOyDg6NAiPDirzs8JH0bJNqMfkBoS8SlN9iZBB7tUgyUCWUjhglOG3k+OjPbcuwITNxFlbTEUU4yOnMPc7t83+FDXIQjOA4hg46NN+SnPIs6jB/1rLcx8RgdLVOUjW7rnU=) 2026-03-10 00:21:56.769722 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJpPm7nXhsEFly5XJRJXftxKHAR5D2c9Qv5+lbf0/Hkm2oETMmb9fPtP2rP0QW2dJWftWrcnqXxnpFcgzu7hyjk=) 2026-03-10 00:21:56.769737 | orchestrator | 2026-03-10 00:21:56.769748 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:21:56.769758 | orchestrator | Tuesday 10 March 2026 00:21:50 +0000 (0:00:01.090) 0:00:07.482 ********* 2026-03-10 00:21:56.769788 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Fm73x7NZzMhN345Y/ECp3xrMefFizkfscxk0ZtKjewi1EJ9N9VVNfrWtxBZlEsaBou4o+XhIkH2je8iPlXipUfYEX+tXTIj2EZSnvGmr/3QAyy423f3jDmOE0Fw0Ne3CKsNSgHk6btt66UN1A4LvDDnyb738upRFFodtm6/SSeWJsoLhgXLCP+KLnDpFzIo3biWNem7i3DD4cAwAoJiVNNT6+mAM+4VhLODrxRilWq8W5ay3PlFZXVDhSfz6bgoh8GbI8N2aWLSv/YjSVhJZ/O3BTmXJfA+/dtBRv2BbEixUuFHY5kxR9358rjdugsmseyt4EW0Z3t7x/YYMuQm9O9HHLkUqpmkyKEYO1UuUUAlrQnBWCe4CVEd0qXSOLHR/veXZYlmg5vZjaegAVf5YrXRq5wyUoCbEuPogacL1srJatV1lOnSdy+rfx5L33T2IoN7+5njOKg9mCQLGb9D+gTWI/SyOg0lvbcYxlmR20jATn8j0QVRYbze9lUB25CU=) 2026-03-10 00:21:56.769803 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCkONImnnNhzXXoIFUG3Z58o6Yq/Af73n28gNwKRqhq) 2026-03-10 00:21:56.769815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMJGjJADz8hZFfSU1YllWMZwdBCTQO8F/dqOihoDNd28+xfzj34JHwI/NwanWo4aHccqobTeN/FNRWxwv/ufzFk=) 2026-03-10 00:21:56.769827 | orchestrator | 2026-03-10 00:21:56.769839 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:21:56.769851 | orchestrator | Tuesday 10 March 2026 00:21:52 +0000 (0:00:01.188) 0:00:08.670 ********* 2026-03-10 00:21:56.769863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAArLeakGn6K/CgYnINIzMVFbMKG0AjewlsJBP81rjTqJyuZC4Cet+6QGLUn6g9kD4jymJwXWivwLGLYF8R2XFY=) 2026-03-10 00:21:56.769876 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVVOpVX09sVKCQfCReNX16ZOwCuFx6IEHoHnxPFyES0ZtshMVVJC1n6tGc1O6JDe3Ln7U6q/nMU5AQwSiSe1yO448nEF6w6FSJ/HbQnb4Xpys6UMVojlQt52B9nDwEshEHTSOL5tu2Bs1IjiSV0SUe2N4PpkRahnZ6woaPNO+kZq/HkUu2uRwr0G9iS8rWWFcf96eS6DeDYkFYZOCJLcIwekgwJvsW4hhyeR6hD1qJGO4c6M/a+BW+ODqPJW/y4yB1aB+kMEyIolhs5DMC68zpA38dEjqfy7OO4vi7jA7wwm+Lm9LloTZr2vZA5V6s6Z11OTfylwYrvLMwP3iMjAL8LlzPAAgpcJTIEzgwO4VkXuYwdvMOG4YABcaE08pAQKsLTMLcoyUXK5QVgy5NyNc4KgpuKTja3CkxsOhBlypIu9HWu8K+08gWcPmfWvyPf6y2deoEbxHanOv43B4oooZi5SljJKYxJRzWzorrPhkS3yWTIsS5UkOOFaN/kmRT5gE=) 2026-03-10 00:21:56.769890 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVmHT0x+IS32BWwbMfbI+ClkoJc1gUEs/NojrjhZHin) 2026-03-10 00:21:56.769985 | orchestrator | 2026-03-10 00:21:56.770008 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:21:56.770103 | orchestrator | Tuesday 10 March 2026 00:21:53 +0000 (0:00:01.174) 
0:00:09.845 ********* 2026-03-10 00:21:56.770118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMFWC+PMc1Hi651uRr9HF8lDBuwjypL08bPYhcHhO4E5) 2026-03-10 00:21:56.770131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC96sWwUB8tCa7M312lZhsvKrh0+qqYzOaUsiaLabkwdBO97bpvkXV4R7sIxahAldMFojEv53QJCBOzJzA75MOOs2Sm4vA+H/3Dytu1RXwAiuTLSIrn4KljTZKeH+afM2nJKVQp4OtvY4usJqbX6ES2EQ61ktbfHHw54fBa7zo6B9Tcr6cK3sCYPnWTiHqyYTcmSqf7tduD7Ejvk02pBY1E2535Mu3DACAjPEsRtK1cMXYxiZRXWqg6vBouzenR04Bo07X7ZfmfvNEYT3BJP5BCo8mAQdJWFsDO3NAX9s+j/XvAn+445xVLalBs9QiHxCITGodBKN94haBgwUX8mG2/Mu3R1PEyfVxuyJowcgKvZ8Qa//KfRzlgSfpsmWtn/0kp7szudL2Ze5VKZMdseNWTb1A9ryNGBGtWQcdVioZtjzw46wUBftUZeaPu08V+AAcbl9IImrGJLLPdGqnZxh9xutyaYQnmV3mSFagwhFt1rmq5BsMNsEMglhsH06zyehE=) 2026-03-10 00:21:56.770155 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF/DMGxkkWOWx718Y8wYCHtypMedpBN9eNB/rs/Lrs+WU07s2HcAn6CyRuxRBT/VG/it+aBMs0BU8kyQJ72wL4U=) 2026-03-10 00:21:56.770166 | orchestrator | 2026-03-10 00:21:56.770176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:21:56.770187 | orchestrator | Tuesday 10 March 2026 00:21:54 +0000 (0:00:01.116) 0:00:10.962 ********* 2026-03-10 00:21:56.770320 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICmQWLd4tmTzC7s+usY4w/4Jm+ZlNQOIPsFFsXC5F6Hv) 2026-03-10 00:21:56.770346 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCscdio1KN2s/WPn0Y07+eVthnX6xXTkxjz0WemsoDZvNPIC8sDyDtVihaWtgR/J6X/9ghcXbRB6+qcNeCJRhJzWSziHuuSg1pZMnGg8cOnDA5KyTxK0OSksTRgQ9yVbRTohpKPF1JjdrSd/YHxbW6QtUrG7rJIRJ7glXhd33YjfKb8EpYXscO8zmuvbz8xPUWj7Rh/1WGVYK1hP64MJUGYmK+j2iZEMBB4LzOi8KJtOEMIRk1qTtjFE8mwvRaY+nldqaNkzLal7dYvX4E743cyCXMbMNo+gmQIC07tDdoIJIiRjb3ehBFqnkO6gd+cewN67Eu2MDzKWY7hPfJN3NZw7ZtVZosO8AVlNjFn4XZtx0imPefCo6arkDegNVQupUFTR3tjbiQaKv2vI+9nT5danP6oeMvjQSFV8lLMKsEV3cW4QZuT4qswq37J54ZN3QXeRpR41wyHa8uZ98a1LjcFyFna1YLKjhLbNCFVM9AT3NjL+48gnZnCiwtaRovzfyc=) 2026-03-10 00:21:56.770366 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGvJrU9Q7EZFCaWHcbvqfm18pJAFICyfrQFRBg2QZ9clKn3BRoMtRalYX+/wbOq+TzRw6VGvMmtmYqKoDgnE35k=) 2026-03-10 00:21:56.770377 | orchestrator | 2026-03-10 00:21:56.770388 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:21:56.770398 | orchestrator | Tuesday 10 March 2026 00:21:55 +0000 (0:00:01.153) 0:00:12.115 ********* 2026-03-10 00:21:56.770424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4zsiwAKduN8hxX0eHIMZdDLeEE5/onVZO+D9/0sG2iRAbY5DHv3ocy1+eVnC9N1mVELrJb6IwxnztdWsK2extcEuytyyNWsKjr1mg3n5rzXbPYRPxrcAWGUtbyYEVGb6AFJFk1syU+fzHeSHviwSBd/diXj9gO7SDeFy2aI4APzLR4gIc1pVm3NHtkhw0KWuZMWe6yQGNzhOooMT43s13os1rcvhjafMVeJHn2ozAaV3FV5TIYecHXOlQDDV16dhZyv2oEs4xsqWgwj2NgWaiZ0b7sXi/gx6go0XoqoMZLb2ka0E8jddSDsDwztShSz5Hob8ucnj1fU5EX3wmWKj8xD4a+bkazwLwk/t4pxryivdH8EIi/wOF8I8SW1Rkl1QnVwXT8fwWVEuzQpCoa2W6V21tjeC3Od1gkE7VMur+IyM5UPBzGexDR+kjcZiEpHnG5nr+7S/JOOnKXjeQ5jTcfQFpV3mF+pRh3YAw+1MtLnIoPpRS0iOMznSfRTCGiOk=) 2026-03-10 00:22:08.092798 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87TZ2O18AXyHKgM+DrH3oZYPvrZCIA30twWXlXOyFeWMbQf8uCoLWBFk69rvANFMZnCg3Hi8JtQjmlfkeisOA=) 
2026-03-10 00:22:08.092995 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWczBwMz7SB864B0sBHhCBEYN3fUXRJXhU2n/bOrRBM) 2026-03-10 00:22:08.093028 | orchestrator | 2026-03-10 00:22:08.093049 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:08.093070 | orchestrator | Tuesday 10 March 2026 00:21:56 +0000 (0:00:01.160) 0:00:13.276 ********* 2026-03-10 00:22:08.093081 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPzwZBXQpsuEN1wlrZP/zQ9lNrFV3o+//fajtXfpfuZuEtwCV00WvAM3ZGkNqI+AYbyECTvbuJmUASBfNDTsrkw=) 2026-03-10 00:22:08.093096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUrxqsy6XHwxRZho0AHH2jUnCwplKEh4bxjSJdaB9dMn0ajh3q0oL5WggXbnR+I8p3A/A+OWwMyoTo4az1btZfptvZh60GUM2csaeP1voEQd8Nl26Azn8XwON7BPs6ZkBNp7UZjWYXbBoAzLk4yNfRvreoBfNSVBlbeg0iB2B2tfmOZ5P4PoNnldWGfEpEsPGqVhskL66RlJJh6fNp9zxxVB6p6o/vqIuVdgLb5A+Rcy9uX71hHK55Qkdux8C5+C+IUKrQjwtsg0oyT/144hBffthLIzop/V2Io2lLS7374cNO+IGRU2gpTKW80x+wWGDu3AqfGa84GAmlVp9RyzleAtkNonbId5SMjTjJMpso7ks2HdlKXsT9jJ1QKMStIfYatx0IgiND8sAL3ndrGFbJGtdYEmQFwNwq5DYDn037X25/4PSfUxtNovUHd0FlDgNJ2Lu0RDMRa//cjuQHYrDxPxVQ6bFBpmI0CU8OOzggJRD4l0Nc9mAyBl6B888X180=) 2026-03-10 00:22:08.093139 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGyLDLzSk49QsYBvjUJvA1mTJw3cKJv7UBaCqG8AUx5p) 2026-03-10 00:22:08.093158 | orchestrator | 2026-03-10 00:22:08.093176 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-10 00:22:08.093194 | orchestrator | Tuesday 10 March 2026 00:21:57 +0000 (0:00:01.087) 0:00:14.363 ********* 2026-03-10 00:22:08.093211 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-10 00:22:08.093230 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-03-10 00:22:08.093247 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-10 00:22:08.093263 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-10 00:22:08.093280 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-10 00:22:08.093298 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-10 00:22:08.093316 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-10 00:22:08.093336 | orchestrator | 2026-03-10 00:22:08.093356 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-10 00:22:08.093378 | orchestrator | Tuesday 10 March 2026 00:22:03 +0000 (0:00:05.440) 0:00:19.804 ********* 2026-03-10 00:22:08.093393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-10 00:22:08.093409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-10 00:22:08.093422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-10 00:22:08.093435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-10 00:22:08.093448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-10 00:22:08.093461 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-10 00:22:08.093474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-10 00:22:08.093486 | orchestrator | 2026-03-10 00:22:08.093499 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:08.093512 | orchestrator | Tuesday 10 March 2026 00:22:03 +0000 (0:00:00.192) 0:00:19.997 ********* 2026-03-10 00:22:08.093525 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIItt6pIbDG6F8pMhC9xV9Q8DONv1ZWCTqAY0WTa6Alz3) 2026-03-10 00:22:08.093589 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC76//Y2ZluUHfbtANb69e08ziwZIIA1CHrmPR3KEtOu0o4M5yu47cCMtD+WEZgJQda8t/Z4gbEFpmJnK6SZ/AErB7P0cYVAdzSUDaD11qT5yCKRLcj9DTV9CuR+vT/e3bczPmwf0e70e1NFr1XrOrWlM20qWwNbRVXTce3197Wib9pidrVt0TPykUS84aEEV9jbYYNGdkVU4JR0FrQVUFuw9aJNKbnAXq7hxKS05b3DMe/1mppoU+UFOOi8nyT2pmRkUoKyfFnWpkbd2ZYqjqVsNm5QXKJbBcWwj4cPqQmwdWgLKS2rNtsiPTWL0e0G8EyC5RVdBZ5e3+qu0kbe2PL7Gn4mojXr2syZvTW0fNwx2mwHjUFqtIo+n+PhKpk3YOyDg6NAiPDirzs8JH0bJNqMfkBoS8SlN9iZBB7tUgyUCWUjhglOG3k+OjPbcuwITNxFlbTEUU4yOnMPc7t83+FDXIQjOA4hg46NN+SnPIs6jB/1rLcx8RgdLVOUjW7rnU=) 2026-03-10 00:22:08.093609 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJpPm7nXhsEFly5XJRJXftxKHAR5D2c9Qv5+lbf0/Hkm2oETMmb9fPtP2rP0QW2dJWftWrcnqXxnpFcgzu7hyjk=) 2026-03-10 00:22:08.093643 | orchestrator | 2026-03-10 00:22:08.093663 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:08.093681 | orchestrator | Tuesday 10 March 2026 
00:22:04 +0000 (0:00:01.115) 0:00:21.113 ********* 2026-03-10 00:22:08.093702 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCkONImnnNhzXXoIFUG3Z58o6Yq/Af73n28gNwKRqhq) 2026-03-10 00:22:08.093715 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Fm73x7NZzMhN345Y/ECp3xrMefFizkfscxk0ZtKjewi1EJ9N9VVNfrWtxBZlEsaBou4o+XhIkH2je8iPlXipUfYEX+tXTIj2EZSnvGmr/3QAyy423f3jDmOE0Fw0Ne3CKsNSgHk6btt66UN1A4LvDDnyb738upRFFodtm6/SSeWJsoLhgXLCP+KLnDpFzIo3biWNem7i3DD4cAwAoJiVNNT6+mAM+4VhLODrxRilWq8W5ay3PlFZXVDhSfz6bgoh8GbI8N2aWLSv/YjSVhJZ/O3BTmXJfA+/dtBRv2BbEixUuFHY5kxR9358rjdugsmseyt4EW0Z3t7x/YYMuQm9O9HHLkUqpmkyKEYO1UuUUAlrQnBWCe4CVEd0qXSOLHR/veXZYlmg5vZjaegAVf5YrXRq5wyUoCbEuPogacL1srJatV1lOnSdy+rfx5L33T2IoN7+5njOKg9mCQLGb9D+gTWI/SyOg0lvbcYxlmR20jATn8j0QVRYbze9lUB25CU=) 2026-03-10 00:22:08.093726 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMJGjJADz8hZFfSU1YllWMZwdBCTQO8F/dqOihoDNd28+xfzj34JHwI/NwanWo4aHccqobTeN/FNRWxwv/ufzFk=) 2026-03-10 00:22:08.093737 | orchestrator | 2026-03-10 00:22:08.093747 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:08.093758 | orchestrator | Tuesday 10 March 2026 00:22:05 +0000 (0:00:01.088) 0:00:22.201 ********* 2026-03-10 00:22:08.093769 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDVVOpVX09sVKCQfCReNX16ZOwCuFx6IEHoHnxPFyES0ZtshMVVJC1n6tGc1O6JDe3Ln7U6q/nMU5AQwSiSe1yO448nEF6w6FSJ/HbQnb4Xpys6UMVojlQt52B9nDwEshEHTSOL5tu2Bs1IjiSV0SUe2N4PpkRahnZ6woaPNO+kZq/HkUu2uRwr0G9iS8rWWFcf96eS6DeDYkFYZOCJLcIwekgwJvsW4hhyeR6hD1qJGO4c6M/a+BW+ODqPJW/y4yB1aB+kMEyIolhs5DMC68zpA38dEjqfy7OO4vi7jA7wwm+Lm9LloTZr2vZA5V6s6Z11OTfylwYrvLMwP3iMjAL8LlzPAAgpcJTIEzgwO4VkXuYwdvMOG4YABcaE08pAQKsLTMLcoyUXK5QVgy5NyNc4KgpuKTja3CkxsOhBlypIu9HWu8K+08gWcPmfWvyPf6y2deoEbxHanOv43B4oooZi5SljJKYxJRzWzorrPhkS3yWTIsS5UkOOFaN/kmRT5gE=) 2026-03-10 00:22:08.093780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAArLeakGn6K/CgYnINIzMVFbMKG0AjewlsJBP81rjTqJyuZC4Cet+6QGLUn6g9kD4jymJwXWivwLGLYF8R2XFY=) 2026-03-10 00:22:08.093791 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVmHT0x+IS32BWwbMfbI+ClkoJc1gUEs/NojrjhZHin) 2026-03-10 00:22:08.093802 | orchestrator | 2026-03-10 00:22:08.093813 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:08.093823 | orchestrator | Tuesday 10 March 2026 00:22:06 +0000 (0:00:01.215) 0:00:23.417 ********* 2026-03-10 00:22:08.093834 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC96sWwUB8tCa7M312lZhsvKrh0+qqYzOaUsiaLabkwdBO97bpvkXV4R7sIxahAldMFojEv53QJCBOzJzA75MOOs2Sm4vA+H/3Dytu1RXwAiuTLSIrn4KljTZKeH+afM2nJKVQp4OtvY4usJqbX6ES2EQ61ktbfHHw54fBa7zo6B9Tcr6cK3sCYPnWTiHqyYTcmSqf7tduD7Ejvk02pBY1E2535Mu3DACAjPEsRtK1cMXYxiZRXWqg6vBouzenR04Bo07X7ZfmfvNEYT3BJP5BCo8mAQdJWFsDO3NAX9s+j/XvAn+445xVLalBs9QiHxCITGodBKN94haBgwUX8mG2/Mu3R1PEyfVxuyJowcgKvZ8Qa//KfRzlgSfpsmWtn/0kp7szudL2Ze5VKZMdseNWTb1A9ryNGBGtWQcdVioZtjzw46wUBftUZeaPu08V+AAcbl9IImrGJLLPdGqnZxh9xutyaYQnmV3mSFagwhFt1rmq5BsMNsEMglhsH06zyehE=) 2026-03-10 00:22:08.093845 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF/DMGxkkWOWx718Y8wYCHtypMedpBN9eNB/rs/Lrs+WU07s2HcAn6CyRuxRBT/VG/it+aBMs0BU8kyQJ72wL4U=) 2026-03-10 00:22:08.093868 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMFWC+PMc1Hi651uRr9HF8lDBuwjypL08bPYhcHhO4E5) 2026-03-10 00:22:12.312131 | orchestrator | 2026-03-10 00:22:12.312220 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:12.312236 | orchestrator | Tuesday 10 March 2026 00:22:08 +0000 (0:00:01.180) 0:00:24.598 ********* 2026-03-10 00:22:12.312251 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCscdio1KN2s/WPn0Y07+eVthnX6xXTkxjz0WemsoDZvNPIC8sDyDtVihaWtgR/J6X/9ghcXbRB6+qcNeCJRhJzWSziHuuSg1pZMnGg8cOnDA5KyTxK0OSksTRgQ9yVbRTohpKPF1JjdrSd/YHxbW6QtUrG7rJIRJ7glXhd33YjfKb8EpYXscO8zmuvbz8xPUWj7Rh/1WGVYK1hP64MJUGYmK+j2iZEMBB4LzOi8KJtOEMIRk1qTtjFE8mwvRaY+nldqaNkzLal7dYvX4E743cyCXMbMNo+gmQIC07tDdoIJIiRjb3ehBFqnkO6gd+cewN67Eu2MDzKWY7hPfJN3NZw7ZtVZosO8AVlNjFn4XZtx0imPefCo6arkDegNVQupUFTR3tjbiQaKv2vI+9nT5danP6oeMvjQSFV8lLMKsEV3cW4QZuT4qswq37J54ZN3QXeRpR41wyHa8uZ98a1LjcFyFna1YLKjhLbNCFVM9AT3NjL+48gnZnCiwtaRovzfyc=) 2026-03-10 00:22:12.312265 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGvJrU9Q7EZFCaWHcbvqfm18pJAFICyfrQFRBg2QZ9clKn3BRoMtRalYX+/wbOq+TzRw6VGvMmtmYqKoDgnE35k=) 2026-03-10 00:22:12.312278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICmQWLd4tmTzC7s+usY4w/4Jm+ZlNQOIPsFFsXC5F6Hv) 2026-03-10 00:22:12.312289 | orchestrator | 2026-03-10 00:22:12.312300 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:12.312311 | orchestrator | Tuesday 10 March 2026 00:22:09 +0000 (0:00:01.083) 0:00:25.682 
********* 2026-03-10 00:22:12.312322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4zsiwAKduN8hxX0eHIMZdDLeEE5/onVZO+D9/0sG2iRAbY5DHv3ocy1+eVnC9N1mVELrJb6IwxnztdWsK2extcEuytyyNWsKjr1mg3n5rzXbPYRPxrcAWGUtbyYEVGb6AFJFk1syU+fzHeSHviwSBd/diXj9gO7SDeFy2aI4APzLR4gIc1pVm3NHtkhw0KWuZMWe6yQGNzhOooMT43s13os1rcvhjafMVeJHn2ozAaV3FV5TIYecHXOlQDDV16dhZyv2oEs4xsqWgwj2NgWaiZ0b7sXi/gx6go0XoqoMZLb2ka0E8jddSDsDwztShSz5Hob8ucnj1fU5EX3wmWKj8xD4a+bkazwLwk/t4pxryivdH8EIi/wOF8I8SW1Rkl1QnVwXT8fwWVEuzQpCoa2W6V21tjeC3Od1gkE7VMur+IyM5UPBzGexDR+kjcZiEpHnG5nr+7S/JOOnKXjeQ5jTcfQFpV3mF+pRh3YAw+1MtLnIoPpRS0iOMznSfRTCGiOk=) 2026-03-10 00:22:12.312334 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87TZ2O18AXyHKgM+DrH3oZYPvrZCIA30twWXlXOyFeWMbQf8uCoLWBFk69rvANFMZnCg3Hi8JtQjmlfkeisOA=) 2026-03-10 00:22:12.312358 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWczBwMz7SB864B0sBHhCBEYN3fUXRJXhU2n/bOrRBM) 2026-03-10 00:22:12.312369 | orchestrator | 2026-03-10 00:22:12.312380 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:22:12.312391 | orchestrator | Tuesday 10 March 2026 00:22:10 +0000 (0:00:01.136) 0:00:26.818 ********* 2026-03-10 00:22:12.312420 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCUrxqsy6XHwxRZho0AHH2jUnCwplKEh4bxjSJdaB9dMn0ajh3q0oL5WggXbnR+I8p3A/A+OWwMyoTo4az1btZfptvZh60GUM2csaeP1voEQd8Nl26Azn8XwON7BPs6ZkBNp7UZjWYXbBoAzLk4yNfRvreoBfNSVBlbeg0iB2B2tfmOZ5P4PoNnldWGfEpEsPGqVhskL66RlJJh6fNp9zxxVB6p6o/vqIuVdgLb5A+Rcy9uX71hHK55Qkdux8C5+C+IUKrQjwtsg0oyT/144hBffthLIzop/V2Io2lLS7374cNO+IGRU2gpTKW80x+wWGDu3AqfGa84GAmlVp9RyzleAtkNonbId5SMjTjJMpso7ks2HdlKXsT9jJ1QKMStIfYatx0IgiND8sAL3ndrGFbJGtdYEmQFwNwq5DYDn037X25/4PSfUxtNovUHd0FlDgNJ2Lu0RDMRa//cjuQHYrDxPxVQ6bFBpmI0CU8OOzggJRD4l0Nc9mAyBl6B888X180=) 2026-03-10 00:22:12.312432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPzwZBXQpsuEN1wlrZP/zQ9lNrFV3o+//fajtXfpfuZuEtwCV00WvAM3ZGkNqI+AYbyECTvbuJmUASBfNDTsrkw=) 2026-03-10 00:22:12.312443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGyLDLzSk49QsYBvjUJvA1mTJw3cKJv7UBaCqG8AUx5p) 2026-03-10 00:22:12.312454 | orchestrator | 2026-03-10 00:22:12.312465 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-10 00:22:12.312498 | orchestrator | Tuesday 10 March 2026 00:22:11 +0000 (0:00:01.001) 0:00:27.820 ********* 2026-03-10 00:22:12.312510 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-10 00:22:12.312521 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-10 00:22:12.312531 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-10 00:22:12.312542 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-10 00:22:12.312552 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-10 00:22:12.312563 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-10 00:22:12.312574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-10 00:22:12.312584 | orchestrator | 
skipping: [testbed-manager] 2026-03-10 00:22:12.312595 | orchestrator | 2026-03-10 00:22:12.312621 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-10 00:22:12.312632 | orchestrator | Tuesday 10 March 2026 00:22:11 +0000 (0:00:00.146) 0:00:27.966 ********* 2026-03-10 00:22:12.312643 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:22:12.312653 | orchestrator | 2026-03-10 00:22:12.312664 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-10 00:22:12.312675 | orchestrator | Tuesday 10 March 2026 00:22:11 +0000 (0:00:00.048) 0:00:28.015 ********* 2026-03-10 00:22:12.312693 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:22:12.312706 | orchestrator | 2026-03-10 00:22:12.312718 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-10 00:22:12.312730 | orchestrator | Tuesday 10 March 2026 00:22:11 +0000 (0:00:00.061) 0:00:28.076 ********* 2026-03-10 00:22:12.312742 | orchestrator | changed: [testbed-manager] 2026-03-10 00:22:12.312754 | orchestrator | 2026-03-10 00:22:12.312765 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:22:12.312777 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:22:12.312791 | orchestrator | 2026-03-10 00:22:12.312803 | orchestrator | 2026-03-10 00:22:12.312814 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:22:12.312826 | orchestrator | Tuesday 10 March 2026 00:22:12 +0000 (0:00:00.603) 0:00:28.680 ********* 2026-03-10 00:22:12.312838 | orchestrator | =============================================================================== 2026-03-10 00:22:12.312850 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.07s 2026-03-10 
00:22:12.312862 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.44s 2026-03-10 00:22:12.312875 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-10 00:22:12.312887 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-03-10 00:22:12.312899 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-10 00:22:12.312973 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-10 00:22:12.312985 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-03-10 00:22:12.312996 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-03-10 00:22:12.313006 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-10 00:22:12.313017 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-10 00:22:12.313028 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-10 00:22:12.313038 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-10 00:22:12.313049 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-10 00:22:12.313059 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-10 00:22:12.313078 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-10 00:22:12.313089 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-10 00:22:12.313100 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.60s 2026-03-10 
00:22:12.313110 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-03-10 00:22:12.313121 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-03-10 00:22:12.313132 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-03-10 00:22:12.510486 | orchestrator | + osism apply squid 2026-03-10 00:22:24.347883 | orchestrator | 2026-03-10 00:22:24 | INFO  | Task 90d14857-cb46-493a-9383-2d0a66bb61de (squid) was prepared for execution. 2026-03-10 00:22:24.348060 | orchestrator | 2026-03-10 00:22:24 | INFO  | It takes a moment until task 90d14857-cb46-493a-9383-2d0a66bb61de (squid) has been started and output is visible here. 2026-03-10 00:24:23.276156 | orchestrator | 2026-03-10 00:24:23.276273 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-10 00:24:23.276289 | orchestrator | 2026-03-10 00:24:23.276300 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-10 00:24:23.276311 | orchestrator | Tuesday 10 March 2026 00:22:28 +0000 (0:00:00.161) 0:00:00.161 ********* 2026-03-10 00:24:23.276321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:24:23.276331 | orchestrator | 2026-03-10 00:24:23.276341 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-10 00:24:23.276351 | orchestrator | Tuesday 10 March 2026 00:22:28 +0000 (0:00:00.083) 0:00:00.245 ********* 2026-03-10 00:24:23.276360 | orchestrator | ok: [testbed-manager] 2026-03-10 00:24:23.276371 | orchestrator | 2026-03-10 00:24:23.276381 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-10 
00:24:23.276390 | orchestrator | Tuesday 10 March 2026 00:22:30 +0000 (0:00:01.489) 0:00:01.734 ********* 2026-03-10 00:24:23.276401 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-10 00:24:23.276412 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-10 00:24:23.276430 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-10 00:24:23.276453 | orchestrator | 2026-03-10 00:24:23.276479 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-10 00:24:23.276496 | orchestrator | Tuesday 10 March 2026 00:22:31 +0000 (0:00:01.161) 0:00:02.895 ********* 2026-03-10 00:24:23.276510 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-10 00:24:23.276524 | orchestrator | 2026-03-10 00:24:23.276537 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-10 00:24:23.276551 | orchestrator | Tuesday 10 March 2026 00:22:32 +0000 (0:00:01.126) 0:00:04.021 ********* 2026-03-10 00:24:23.276566 | orchestrator | ok: [testbed-manager] 2026-03-10 00:24:23.276580 | orchestrator | 2026-03-10 00:24:23.276595 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-10 00:24:23.276611 | orchestrator | Tuesday 10 March 2026 00:22:32 +0000 (0:00:00.362) 0:00:04.383 ********* 2026-03-10 00:24:23.276627 | orchestrator | changed: [testbed-manager] 2026-03-10 00:24:23.276642 | orchestrator | 2026-03-10 00:24:23.276656 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-10 00:24:23.276669 | orchestrator | Tuesday 10 March 2026 00:22:33 +0000 (0:00:00.925) 0:00:05.309 ********* 2026-03-10 00:24:23.276682 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-10 00:24:23.276702 | orchestrator | ok: [testbed-manager]
2026-03-10 00:24:23.276718 | orchestrator |
2026-03-10 00:24:23.276734 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-10 00:24:23.276771 | orchestrator | Tuesday 10 March 2026 00:23:06 +0000 (0:00:32.873) 0:00:38.183 *********
2026-03-10 00:24:23.276780 | orchestrator | changed: [testbed-manager]
2026-03-10 00:24:23.276789 | orchestrator |
2026-03-10 00:24:23.276798 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-10 00:24:23.276808 | orchestrator | Tuesday 10 March 2026 00:23:22 +0000 (0:00:15.730) 0:00:53.913 *********
2026-03-10 00:24:23.276817 | orchestrator | Pausing for 60 seconds
2026-03-10 00:24:23.276826 | orchestrator | changed: [testbed-manager]
2026-03-10 00:24:23.276835 | orchestrator |
2026-03-10 00:24:23.276844 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-10 00:24:23.276851 | orchestrator | Tuesday 10 March 2026 00:24:22 +0000 (0:01:00.090) 0:01:54.004 *********
2026-03-10 00:24:23.276859 | orchestrator | ok: [testbed-manager]
2026-03-10 00:24:23.276867 | orchestrator |
2026-03-10 00:24:23.276874 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-10 00:24:23.276882 | orchestrator | Tuesday 10 March 2026 00:24:22 +0000 (0:00:00.063) 0:01:54.067 *********
2026-03-10 00:24:23.276890 | orchestrator | changed: [testbed-manager]
2026-03-10 00:24:23.276928 | orchestrator |
2026-03-10 00:24:23.276936 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:24:23.276944 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:24:23.276952 | orchestrator |
2026-03-10 00:24:23.276959 | orchestrator |
2026-03-10 00:24:23.276968 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:24:23.276975 | orchestrator | Tuesday 10 March 2026 00:24:23 +0000 (0:00:00.643) 0:01:54.710 *********
2026-03-10 00:24:23.276983 | orchestrator | ===============================================================================
2026-03-10 00:24:23.277007 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-03-10 00:24:23.277015 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.87s
2026-03-10 00:24:23.277023 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.73s
2026-03-10 00:24:23.277030 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.49s
2026-03-10 00:24:23.277038 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s
2026-03-10 00:24:23.277045 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.13s
2026-03-10 00:24:23.277053 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s
2026-03-10 00:24:23.277060 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s
2026-03-10 00:24:23.277068 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2026-03-10 00:24:23.277076 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-03-10 00:24:23.277083 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-03-10 00:24:23.583070 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-10 00:24:23.583225 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-10 00:24:23.625879 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-10 00:24:23.625980 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-10 00:24:23.631706 | orchestrator | + set -e
2026-03-10 00:24:23.631770 | orchestrator | + NAMESPACE=kolla/release
2026-03-10 00:24:23.631780 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-10 00:24:23.636291 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-10 00:24:23.707288 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-10 00:24:23.708283 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-10 00:24:35.774943 | orchestrator | 2026-03-10 00:24:35 | INFO  | Task 9d0d2026-9ab2-4151-b932-9b3c901e11e6 (operator) was prepared for execution.
2026-03-10 00:24:35.775055 | orchestrator | 2026-03-10 00:24:35 | INFO  | It takes a moment until task 9d0d2026-9ab2-4151-b932-9b3c901e11e6 (operator) has been started and output is visible here.
2026-03-10 00:24:52.732012 | orchestrator |
2026-03-10 00:24:52.732127 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-10 00:24:52.732144 | orchestrator |
2026-03-10 00:24:52.732156 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-10 00:24:52.732168 | orchestrator | Tuesday 10 March 2026 00:24:39 +0000 (0:00:00.144) 0:00:00.145 *********
2026-03-10 00:24:52.732179 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:24:52.732191 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:24:52.732202 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:24:52.732212 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:24:52.732223 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:24:52.732234 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:24:52.732245 | orchestrator |
2026-03-10 00:24:52.732256 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-10 00:24:52.732267 | orchestrator | Tuesday 10 March 2026 00:24:44 +0000 (0:00:04.278) 0:00:04.423 *********
2026-03-10 00:24:52.732278 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:24:52.732303 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:24:52.732325 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:24:52.732348 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:24:52.732359 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:24:52.732370 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:24:52.732380 | orchestrator |
2026-03-10 00:24:52.732391 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-10 00:24:52.732402 | orchestrator |
2026-03-10 00:24:52.732413 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-10 00:24:52.732424 | orchestrator | Tuesday 10 March 2026 00:24:44 +0000 (0:00:00.802) 0:00:05.226 *********
2026-03-10 00:24:52.732435 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:24:52.732446 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:24:52.732456 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:24:52.732467 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:24:52.732478 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:24:52.732490 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:24:52.732503 | orchestrator |
2026-03-10 00:24:52.732516 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-10 00:24:52.732529 | orchestrator | Tuesday 10 March 2026 00:24:45 +0000 (0:00:00.167) 0:00:05.393 *********
2026-03-10 00:24:52.732541 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:24:52.732554 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:24:52.732566 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:24:52.732578 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:24:52.732591 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:24:52.732603 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:24:52.732614 | orchestrator |
2026-03-10 00:24:52.732625 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-10 00:24:52.732636 | orchestrator | Tuesday 10 March 2026 00:24:45 +0000 (0:00:00.177) 0:00:05.571 *********
2026-03-10 00:24:52.732647 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:24:52.732659 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:24:52.732670 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:24:52.732680 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:24:52.732691 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:24:52.732702 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:24:52.732713 | orchestrator |
2026-03-10 00:24:52.732723 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-10 00:24:52.732734 | orchestrator | Tuesday 10 March 2026 00:24:45 +0000 (0:00:00.624) 0:00:06.196 *********
2026-03-10 00:24:52.732745 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:24:52.732756 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:24:52.732767 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:24:52.732777 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:24:52.732788 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:24:52.732799 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:24:52.732831 | orchestrator |
2026-03-10 00:24:52.732842 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-10 00:24:52.732853 | orchestrator | Tuesday 10 March 2026 00:24:46 +0000 (0:00:00.875) 0:00:07.071 *********
2026-03-10 00:24:52.732864 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-10 00:24:52.732875 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-10 00:24:52.732906 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-10 00:24:52.732918 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-10 00:24:52.732928 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-10 00:24:52.732939 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-10 00:24:52.732949 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-10 00:24:52.732960 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-10 00:24:52.732970 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-10 00:24:52.732981 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-10 00:24:52.732991 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-10 00:24:52.733002 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-10 00:24:52.733012 | orchestrator |
2026-03-10 00:24:52.733023 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-10 00:24:52.733034 | orchestrator | Tuesday 10 March 2026 00:24:47 +0000 (0:00:01.195) 0:00:08.266 *********
2026-03-10 00:24:52.733045 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:24:52.733055 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:24:52.733065 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:24:52.733076 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:24:52.733087 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:24:52.733097 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:24:52.733108 | orchestrator |
2026-03-10 00:24:52.733119 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-10 00:24:52.733131 | orchestrator | Tuesday 10 March 2026 00:24:49 +0000 (0:00:01.217) 0:00:09.484 *********
2026-03-10 00:24:52.733141 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-10 00:24:52.733152 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-10 00:24:52.733163 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-10 00:24:52.733173 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-10 00:24:52.733202 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-10 00:24:52.733213 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-10 00:24:52.733224 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-10 00:24:52.733235 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-10 00:24:52.733245 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-10 00:24:52.733256 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-10 00:24:52.733266 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-10 00:24:52.733277 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-10 00:24:52.733287 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-10 00:24:52.733298 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-10 00:24:52.733308 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-10 00:24:52.733319 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-10 00:24:52.733329 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-10 00:24:52.733340 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-10 00:24:52.733350 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-10 00:24:52.733361 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-10 00:24:52.733379 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-10 00:24:52.733390 | orchestrator |
2026-03-10 00:24:52.733400 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-10 00:24:52.733412 | orchestrator | Tuesday 10 March 2026 00:24:50 +0000 (0:00:01.244) 0:00:10.729 *********
2026-03-10 00:24:52.733422 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:52.733433 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:52.733443 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:52.733454 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:52.733464 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:52.733475 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:52.733485 | orchestrator |
2026-03-10 00:24:52.733496 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-10 00:24:52.733507 | orchestrator | Tuesday 10 March 2026 00:24:50 +0000 (0:00:00.171) 0:00:10.901 *********
2026-03-10 00:24:52.733517 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:52.733528 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:52.733539 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:52.733549 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:52.733560 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:52.733570 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:52.733580 | orchestrator |
2026-03-10 00:24:52.733591 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-10 00:24:52.733602 | orchestrator | Tuesday 10 March 2026 00:24:50 +0000 (0:00:00.174) 0:00:11.075 *********
2026-03-10 00:24:52.733612 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:24:52.733623 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:24:52.733634 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:24:52.733644 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:24:52.733654 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:24:52.733665 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:24:52.733675 | orchestrator |
2026-03-10 00:24:52.733686 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-10 00:24:52.733696 | orchestrator | Tuesday 10 March 2026 00:24:51 +0000 (0:00:00.660) 0:00:11.735 *********
2026-03-10 00:24:52.733707 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:52.733717 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:52.733728 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:52.733738 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:52.733765 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:52.733777 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:52.733787 | orchestrator |
2026-03-10 00:24:52.733798 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-10 00:24:52.733809 | orchestrator | Tuesday 10 March 2026 00:24:51 +0000 (0:00:00.153) 0:00:11.889 *********
2026-03-10 00:24:52.733820 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-10 00:24:52.733831 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:24:52.733841 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-10 00:24:52.733852 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-10 00:24:52.733863 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-10 00:24:52.733873 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:24:52.733904 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:24:52.733915 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:24:52.733925 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-10 00:24:52.733936 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:24:52.733947 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-10 00:24:52.733958 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:24:52.733968 | orchestrator |
2026-03-10 00:24:52.733979 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-10 00:24:52.733990 | orchestrator | Tuesday 10 March 2026 00:24:52 +0000 (0:00:00.754) 0:00:12.643 *********
2026-03-10 00:24:52.734008 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:52.734058 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:52.734069 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:52.734080 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:52.734091 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:52.734101 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:52.734112 | orchestrator |
2026-03-10 00:24:52.734123 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-10 00:24:52.734133 | orchestrator | Tuesday 10 March 2026 00:24:52 +0000 (0:00:00.180) 0:00:12.823 *********
2026-03-10 00:24:52.734144 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:52.734155 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:52.734166 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:52.734176 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:52.734195 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:54.023827 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:54.023979 | orchestrator |
2026-03-10 00:24:54.024000 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-10 00:24:54.024013 | orchestrator | Tuesday 10 March 2026 00:24:52 +0000 (0:00:00.171) 0:00:12.995 *********
2026-03-10 00:24:54.024025 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:54.024036 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:54.024046 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:54.024057 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:54.024068 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:54.024079 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:54.024089 | orchestrator |
2026-03-10 00:24:54.024101 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-10 00:24:54.024112 | orchestrator | Tuesday 10 March 2026 00:24:52 +0000 (0:00:00.163) 0:00:13.159 *********
2026-03-10 00:24:54.024122 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:24:54.024133 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:24:54.024162 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:24:54.024174 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:24:54.024184 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:24:54.024195 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:24:54.024205 | orchestrator |
2026-03-10 00:24:54.024216 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-10 00:24:54.024227 | orchestrator | Tuesday 10 March 2026 00:24:53 +0000 (0:00:00.660) 0:00:13.819 *********
2026-03-10 00:24:54.024237 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:24:54.024248 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:24:54.024260 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:24:54.024270 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:24:54.024281 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:24:54.024292 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:24:54.024303 | orchestrator |
2026-03-10 00:24:54.024314 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:24:54.024326 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 00:24:54.024339 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 00:24:54.024352 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 00:24:54.024365 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 00:24:54.024378 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 00:24:54.024417 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 00:24:54.024429 | orchestrator |
2026-03-10 00:24:54.024442 | orchestrator |
2026-03-10 00:24:54.024455 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:24:54.024468 | orchestrator | Tuesday 10 March 2026 00:24:53 +0000 (0:00:00.227) 0:00:14.046 *********
2026-03-10 00:24:54.024480 | orchestrator | ===============================================================================
2026-03-10 00:24:54.024493 | orchestrator | Gathering Facts --------------------------------------------------------- 4.28s
2026-03-10 00:24:54.024505 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s
2026-03-10 00:24:54.024518 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s
2026-03-10 00:24:54.024531 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s
2026-03-10 00:24:54.024544 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s
2026-03-10 00:24:54.024556 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2026-03-10 00:24:54.024569 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2026-03-10 00:24:54.024581 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.66s
2026-03-10 00:24:54.024594 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-03-10 00:24:54.024606 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2026-03-10 00:24:54.024618 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-03-10 00:24:54.024630 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2026-03-10 00:24:54.024644 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-03-10 00:24:54.024656 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-10 00:24:54.024667 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-03-10 00:24:54.024678 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-03-10 00:24:54.024688 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-03-10 00:24:54.024699 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-03-10 00:24:54.024710 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-03-10 00:24:54.351049 | orchestrator | + osism apply --environment custom facts
2026-03-10 00:24:56.289438 | orchestrator | 2026-03-10 00:24:56 | INFO  | Trying to run play facts in environment custom
2026-03-10 00:25:06.488375 | orchestrator | 2026-03-10 00:25:06 | INFO  | Task d1b31402-3897-4c6d-ae3a-1f9f4e9ac6b1 (facts) was prepared for execution.
2026-03-10 00:25:06.488522 | orchestrator | 2026-03-10 00:25:06 | INFO  | It takes a moment until task d1b31402-3897-4c6d-ae3a-1f9f4e9ac6b1 (facts) has been started and output is visible here.
2026-03-10 00:25:50.646851 | orchestrator |
2026-03-10 00:25:50.647042 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-10 00:25:50.647064 | orchestrator |
2026-03-10 00:25:50.647081 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-10 00:25:50.647097 | orchestrator | Tuesday 10 March 2026 00:25:10 +0000 (0:00:00.084) 0:00:00.084 *********
2026-03-10 00:25:50.647115 | orchestrator | ok: [testbed-manager]
2026-03-10 00:25:50.647133 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:25:50.647149 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:25:50.647164 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:25:50.647181 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:25:50.647198 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:25:50.647246 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:25:50.647257 | orchestrator |
2026-03-10 00:25:50.647267 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-10 00:25:50.647277 | orchestrator | Tuesday 10 March 2026 00:25:11 +0000 (0:00:01.381) 0:00:01.465 *********
2026-03-10 00:25:50.647286 | orchestrator | ok: [testbed-manager]
2026-03-10 00:25:50.647296 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:25:50.647310 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:25:50.647325 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:25:50.647342 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:25:50.647359 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:25:50.647376 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:25:50.647394 | orchestrator |
2026-03-10 00:25:50.647412 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-10 00:25:50.647430 | orchestrator |
2026-03-10 00:25:50.647442 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-10 00:25:50.647453 | orchestrator | Tuesday 10 March 2026 00:25:13 +0000 (0:00:01.261) 0:00:02.727 *********
2026-03-10 00:25:50.647463 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.647475 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.647486 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.647496 | orchestrator |
2026-03-10 00:25:50.647507 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-10 00:25:50.647519 | orchestrator | Tuesday 10 March 2026 00:25:13 +0000 (0:00:00.109) 0:00:02.837 *********
2026-03-10 00:25:50.647530 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.647540 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.647551 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.647561 | orchestrator |
2026-03-10 00:25:50.647572 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-10 00:25:50.647583 | orchestrator | Tuesday 10 March 2026 00:25:13 +0000 (0:00:00.244) 0:00:03.081 *********
2026-03-10 00:25:50.647593 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.647604 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.647614 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.647624 | orchestrator |
2026-03-10 00:25:50.647633 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-10 00:25:50.647644 | orchestrator | Tuesday 10 March 2026 00:25:13 +0000 (0:00:00.146) 0:00:03.297 *********
2026-03-10 00:25:50.647655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:25:50.647666 | orchestrator |
2026-03-10 00:25:50.647675 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-10 00:25:50.647685 | orchestrator | Tuesday 10 March 2026 00:25:13 +0000 (0:00:00.448) 0:00:03.443 *********
2026-03-10 00:25:50.647694 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.647703 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.647713 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.647722 | orchestrator |
2026-03-10 00:25:50.647731 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-10 00:25:50.647741 | orchestrator | Tuesday 10 March 2026 00:25:14 +0000 (0:00:00.140) 0:00:03.892 *********
2026-03-10 00:25:50.647750 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:25:50.647760 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:25:50.647769 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:25:50.647783 | orchestrator |
2026-03-10 00:25:50.647799 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-10 00:25:50.647817 | orchestrator | Tuesday 10 March 2026 00:25:14 +0000 (0:00:00.140) 0:00:04.032 *********
2026-03-10 00:25:50.647833 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:25:50.647850 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:25:50.647921 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:25:50.647940 | orchestrator |
2026-03-10 00:25:50.647956 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-10 00:25:50.647984 | orchestrator | Tuesday 10 March 2026 00:25:15 +0000 (0:00:01.063) 0:00:05.096 *********
2026-03-10 00:25:50.647994 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.648004 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.648013 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.648023 | orchestrator |
2026-03-10 00:25:50.648032 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-10 00:25:50.648087 | orchestrator | Tuesday 10 March 2026 00:25:16 +0000 (0:00:00.492) 0:00:05.588 *********
2026-03-10 00:25:50.648098 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:25:50.648108 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:25:50.648117 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:25:50.648126 | orchestrator |
2026-03-10 00:25:50.648136 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-10 00:25:50.648145 | orchestrator | Tuesday 10 March 2026 00:25:17 +0000 (0:00:01.227) 0:00:06.816 *********
2026-03-10 00:25:50.648155 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:25:50.648164 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:25:50.648173 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:25:50.648183 | orchestrator |
2026-03-10 00:25:50.648192 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-10 00:25:50.648202 | orchestrator | Tuesday 10 March 2026 00:25:33 +0000 (0:00:16.437) 0:00:23.253 *********
2026-03-10 00:25:50.648211 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:25:50.648220 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:25:50.648230 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:25:50.648239 | orchestrator |
2026-03-10 00:25:50.648249 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-10 00:25:50.648277 | orchestrator | Tuesday 10 March 2026 00:25:33 +0000 (0:00:00.096) 0:00:23.350 *********
2026-03-10 00:25:50.648288 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:25:50.648297 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:25:50.648306 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:25:50.648316 | orchestrator |
2026-03-10 00:25:50.648326 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-10 00:25:50.648347 | orchestrator | Tuesday 10 March 2026 00:25:41 +0000 (0:00:07.669) 0:00:31.019 *********
2026-03-10 00:25:50.648357 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.648367 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.648376 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.648386 | orchestrator |
2026-03-10 00:25:50.648395 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-10 00:25:50.648405 | orchestrator | Tuesday 10 March 2026 00:25:41 +0000 (0:00:00.458) 0:00:31.477 *********
2026-03-10 00:25:50.648414 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-10 00:25:50.648425 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-10 00:25:50.648435 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-10 00:25:50.648444 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-10 00:25:50.648453 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-10 00:25:50.648463 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-10 00:25:50.648472 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-10 00:25:50.648482 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-10 00:25:50.648491 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-10 00:25:50.648501 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-10 00:25:50.648510 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-10 00:25:50.648519 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-10 00:25:50.648529 | orchestrator |
2026-03-10 00:25:50.648538 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-10 00:25:50.648554 | orchestrator | Tuesday 10 March 2026 00:25:45 +0000 (0:00:03.644) 0:00:35.122 *********
2026-03-10 00:25:50.648564 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.648573 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.648582 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.648592 | orchestrator |
2026-03-10 00:25:50.648601 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-10 00:25:50.648610 | orchestrator |
2026-03-10 00:25:50.648620 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-10 00:25:50.648629 | orchestrator | Tuesday 10 March 2026 00:25:46 +0000 (0:00:01.345) 0:00:36.467 *********
2026-03-10 00:25:50.648639 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:25:50.648648 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:25:50.648658 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:25:50.648667 | orchestrator | ok: [testbed-manager]
2026-03-10 00:25:50.648677 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:25:50.648686 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:25:50.648695 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:25:50.648705 | orchestrator |
2026-03-10 00:25:50.648714 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:25:50.648724 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:25:50.648734 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:25:50.648745 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:25:50.648754 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:25:50.648764 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:25:50.648774 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:25:50.648783 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:25:50.648793 | orchestrator |
2026-03-10 00:25:50.648802 | orchestrator |
2026-03-10 00:25:50.648812 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:25:50.648821 | orchestrator | Tuesday 10 March 2026 00:25:50 +0000 (0:00:03.642) 0:00:40.110 *********
2026-03-10 00:25:50.648831 | orchestrator | ===============================================================================
2026-03-10 00:25:50.648841 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.44s
2026-03-10 00:25:50.648850 | orchestrator | Install required packages (Debian) -------------------------------------- 7.67s
2026-03-10 00:25:50.648881 | orchestrator | Copy fact files --------------------------------------------------------- 3.64s
2026-03-10 00:25:50.648891 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.64s
2026-03-10 00:25:50.648901 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2026-03-10 00:25:50.648910 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.35s
2026-03-10 00:25:50.648925 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2026-03-10 00:25:50.887323 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.23s
2026-03-10 00:25:50.887409 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-03-10 00:25:50.887436 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.49s 2026-03-10 00:25:50.887442 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2026-03-10 00:25:50.887467 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2026-03-10 00:25:50.887473 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s 2026-03-10 00:25:50.887479 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2026-03-10 00:25:50.887485 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-03-10 00:25:50.887492 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-03-10 00:25:50.887498 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2026-03-10 00:25:50.887504 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-03-10 00:25:51.194693 | orchestrator | + osism apply bootstrap 2026-03-10 00:26:03.380015 | orchestrator | 2026-03-10 00:26:03 | INFO  | Task c6fe6865-017c-4f66-89c6-820965a2fd47 (bootstrap) was prepared for execution. 2026-03-10 00:26:03.380137 | orchestrator | 2026-03-10 00:26:03 | INFO  | It takes a moment until task c6fe6865-017c-4f66-89c6-820965a2fd47 (bootstrap) has been started and output is visible here. 
2026-03-10 00:26:19.703406 | orchestrator |
2026-03-10 00:26:19.703537 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-10 00:26:19.703556 | orchestrator |
2026-03-10 00:26:19.703568 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-10 00:26:19.703579 | orchestrator | Tuesday 10 March 2026 00:26:07 +0000 (0:00:00.170) 0:00:00.170 *********
2026-03-10 00:26:19.703590 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:19.703602 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:19.703613 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:19.703624 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:19.703634 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:19.703645 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:19.703655 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:19.703666 | orchestrator |
2026-03-10 00:26:19.703677 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-10 00:26:19.703688 | orchestrator |
2026-03-10 00:26:19.703699 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-10 00:26:19.703710 | orchestrator | Tuesday 10 March 2026 00:26:07 +0000 (0:00:00.242) 0:00:00.412 *********
2026-03-10 00:26:19.703720 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:19.703731 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:19.703741 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:19.703752 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:19.703763 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:19.703773 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:19.703783 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:19.703794 | orchestrator |
2026-03-10 00:26:19.703805 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-10 00:26:19.703815 | orchestrator |
2026-03-10 00:26:19.703826 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-10 00:26:19.703836 | orchestrator | Tuesday 10 March 2026 00:26:11 +0000 (0:00:03.695) 0:00:04.108 *********
2026-03-10 00:26:19.703848 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-10 00:26:19.703893 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-10 00:26:19.703905 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-10 00:26:19.703918 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-10 00:26:19.703929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-10 00:26:19.703942 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-10 00:26:19.703954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:26:19.703966 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-10 00:26:19.703978 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-10 00:26:19.704015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:26:19.704028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:26:19.704040 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-10 00:26:19.704051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-10 00:26:19.704063 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-10 00:26:19.704076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-10 00:26:19.704089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-10 00:26:19.704100 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-10 00:26:19.704113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-10 00:26:19.704125 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-10 00:26:19.704136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-10 00:26:19.704148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-10 00:26:19.704161 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:19.704172 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:26:19.704184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-10 00:26:19.704196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-10 00:26:19.704208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-10 00:26:19.704220 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-10 00:26:19.704232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-10 00:26:19.704244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 00:26:19.704256 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-10 00:26:19.704269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-10 00:26:19.704279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 00:26:19.704289 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-10 00:26:19.704300 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-10 00:26:19.704310 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-10 00:26:19.704320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-10 00:26:19.704330 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:26:19.704341 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:26:19.704351 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-10 00:26:19.704361 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-10 00:26:19.704372 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-10 00:26:19.704382 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-10 00:26:19.704392 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-10 00:26:19.704403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-10 00:26:19.704413 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-10 00:26:19.704424 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-10 00:26:19.704453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-10 00:26:19.704464 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:26:19.704475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-10 00:26:19.704485 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-10 00:26:19.704513 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-10 00:26:19.704524 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:26:19.704535 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-10 00:26:19.704545 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-10 00:26:19.704565 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-10 00:26:19.704576 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:26:19.704586 | orchestrator |
2026-03-10 00:26:19.704597 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-10 00:26:19.704608 | orchestrator |
2026-03-10 00:26:19.704618 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-10 00:26:19.704629 | orchestrator | Tuesday 10 March 2026 00:26:12 +0000 (0:00:00.488) 0:00:04.597 *********
2026-03-10 00:26:19.704639 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:19.704650 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:19.704661 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:19.704671 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:19.704682 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:19.704692 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:19.704703 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:19.704713 | orchestrator |
2026-03-10 00:26:19.704724 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-10 00:26:19.704734 | orchestrator | Tuesday 10 March 2026 00:26:13 +0000 (0:00:01.251) 0:00:05.848 *********
2026-03-10 00:26:19.704745 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:19.704755 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:19.704766 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:19.704776 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:19.704786 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:19.704796 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:19.704807 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:19.704817 | orchestrator |
2026-03-10 00:26:19.704827 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-10 00:26:19.704838 | orchestrator | Tuesday 10 March 2026 00:26:14 +0000 (0:00:00.292) 0:00:07.126 *********
2026-03-10 00:26:19.704850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:26:19.704889 | orchestrator |
2026-03-10 00:26:19.704899 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-10 00:26:19.704910 | orchestrator | Tuesday 10 March 2026 00:26:14 +0000 (0:00:00.292) 0:00:07.419 *********
2026-03-10 00:26:19.704920 | orchestrator | changed: [testbed-manager]
2026-03-10 00:26:19.704931 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:19.704941 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:19.704952 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:26:19.704962 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:26:19.704973 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:26:19.704983 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:19.704993 | orchestrator |
2026-03-10 00:26:19.705004 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-10 00:26:19.705014 | orchestrator | Tuesday 10 March 2026 00:26:17 +0000 (0:00:02.116) 0:00:09.535 *********
2026-03-10 00:26:19.705025 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:19.705037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:26:19.705049 | orchestrator |
2026-03-10 00:26:19.705060 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-10 00:26:19.705071 | orchestrator | Tuesday 10 March 2026 00:26:17 +0000 (0:00:00.259) 0:00:09.795 *********
2026-03-10 00:26:19.705081 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:26:19.705092 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:26:19.705102 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:26:19.705113 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:19.705123 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:19.705134 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:19.705151 | orchestrator |
2026-03-10 00:26:19.705168 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-10 00:26:19.705179 | orchestrator | Tuesday 10 March 2026 00:26:18 +0000 (0:00:01.132) 0:00:10.927 *********
2026-03-10 00:26:19.705189 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:19.705200 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:26:19.705210 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:19.705221 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:26:19.705231 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:19.705242 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:26:19.705252 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:19.705263 | orchestrator |
2026-03-10 00:26:19.705273 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-10 00:26:19.705284 | orchestrator | Tuesday 10 March 2026 00:26:19 +0000 (0:00:00.645) 0:00:11.573 *********
2026-03-10 00:26:19.705294 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:26:19.705305 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:26:19.705315 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:26:19.705325 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:26:19.705336 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:26:19.705346 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:26:19.705357 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:19.705367 | orchestrator |
2026-03-10 00:26:19.705378 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-10 00:26:19.705389 | orchestrator | Tuesday 10 March 2026 00:26:19 +0000 (0:00:00.217) 0:00:12.003 *********
2026-03-10 00:26:19.705400 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:19.705411 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:26:19.705429 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:26:31.913170 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:26:31.913283 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:26:31.913298 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:26:31.913309 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:26:31.913320 | orchestrator |
2026-03-10 00:26:31.913333 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-10 00:26:31.913345 | orchestrator | Tuesday 10 March 2026 00:26:19 +0000 (0:00:00.217) 0:00:12.220 *********
2026-03-10 00:26:31.913358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:26:31.913386 | orchestrator |
2026-03-10 00:26:31.913398 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-10 00:26:31.913410 | orchestrator | Tuesday 10 March 2026 00:26:20 +0000 (0:00:00.297) 0:00:12.518 *********
2026-03-10 00:26:31.913421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:26:31.913431 | orchestrator |
2026-03-10 00:26:31.913442 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-10 00:26:31.913453 | orchestrator | Tuesday 10 March 2026 00:26:20 +0000 (0:00:00.314) 0:00:12.833 *********
2026-03-10 00:26:31.913463 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.913475 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.913485 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.913496 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.913507 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.913518 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.913528 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.913539 | orchestrator |
2026-03-10 00:26:31.913549 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-10 00:26:31.913560 | orchestrator | Tuesday 10 March 2026 00:26:21 +0000 (0:00:01.372) 0:00:14.205 *********
2026-03-10 00:26:31.913597 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:31.913608 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:26:31.913618 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:26:31.913629 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:26:31.913639 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:26:31.913649 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:26:31.913660 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:26:31.913670 | orchestrator |
2026-03-10 00:26:31.913681 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-10 00:26:31.913691 | orchestrator | Tuesday 10 March 2026 00:26:21 +0000 (0:00:00.223) 0:00:14.428 *********
2026-03-10 00:26:31.913702 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.913712 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.913722 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.913733 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.913743 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.913753 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.913764 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.913774 | orchestrator |
2026-03-10 00:26:31.913785 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-10 00:26:31.913795 | orchestrator | Tuesday 10 March 2026 00:26:22 +0000 (0:00:00.569) 0:00:14.998 *********
2026-03-10 00:26:31.913805 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:31.913816 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:26:31.913826 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:26:31.913837 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:26:31.913878 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:26:31.913892 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:26:31.913903 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:26:31.913914 | orchestrator |
2026-03-10 00:26:31.913925 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-10 00:26:31.913937 | orchestrator | Tuesday 10 March 2026 00:26:22 +0000 (0:00:00.340) 0:00:15.338 *********
2026-03-10 00:26:31.913947 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.913957 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:26:31.913968 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:26:31.913978 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:26:31.913988 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:31.913998 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:31.914101 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:31.914116 | orchestrator |
2026-03-10 00:26:31.914130 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-10 00:26:31.914149 | orchestrator | Tuesday 10 March 2026 00:26:23 +0000 (0:00:00.563) 0:00:15.902 *********
2026-03-10 00:26:31.914176 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:26:31.914197 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.914217 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:26:31.914237 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:26:31.914255 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:31.914272 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:31.914292 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:31.914310 | orchestrator |
2026-03-10 00:26:31.914329 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-10 00:26:31.914351 | orchestrator | Tuesday 10 March 2026 00:26:24 +0000 (0:00:01.130) 0:00:17.033 *********
2026-03-10 00:26:31.914371 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.914390 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.914407 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.914418 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.914429 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.914439 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.914450 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.914460 | orchestrator |
2026-03-10 00:26:31.914471 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-10 00:26:31.914495 | orchestrator | Tuesday 10 March 2026 00:26:25 +0000 (0:00:01.032) 0:00:18.066 *********
2026-03-10 00:26:31.914527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:26:31.914539 | orchestrator |
2026-03-10 00:26:31.914550 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-10 00:26:31.914560 | orchestrator | Tuesday 10 March 2026 00:26:25 +0000 (0:00:00.290) 0:00:18.356 *********
2026-03-10 00:26:31.914571 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:31.914581 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:26:31.914592 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:31.914602 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:31.914612 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:31.914623 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:26:31.914633 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:26:31.914643 | orchestrator |
2026-03-10 00:26:31.914654 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-10 00:26:31.914665 | orchestrator | Tuesday 10 March 2026 00:26:27 +0000 (0:00:01.295) 0:00:19.652 *********
2026-03-10 00:26:31.914675 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.914686 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.914696 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.914707 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.914717 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.914728 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.914738 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.914748 | orchestrator |
2026-03-10 00:26:31.914759 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-10 00:26:31.914770 | orchestrator | Tuesday 10 March 2026 00:26:27 +0000 (0:00:00.244) 0:00:19.896 *********
2026-03-10 00:26:31.914805 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.914816 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.914827 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.914837 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.914902 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.914915 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.914926 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.914936 | orchestrator |
2026-03-10 00:26:31.914947 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-10 00:26:31.914958 | orchestrator | Tuesday 10 March 2026 00:26:27 +0000 (0:00:00.246) 0:00:20.143 *********
2026-03-10 00:26:31.914968 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.914979 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.914989 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.914999 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.915009 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.915020 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.915030 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.915040 | orchestrator |
2026-03-10 00:26:31.915051 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-10 00:26:31.915062 | orchestrator | Tuesday 10 March 2026 00:26:27 +0000 (0:00:00.249) 0:00:20.393 *********
2026-03-10 00:26:31.915073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:26:31.915085 | orchestrator |
2026-03-10 00:26:31.915096 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-10 00:26:31.915107 | orchestrator | Tuesday 10 March 2026 00:26:28 +0000 (0:00:00.344) 0:00:20.737 *********
2026-03-10 00:26:31.915117 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.915128 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.915147 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.915157 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.915168 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.915178 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.915193 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.915211 | orchestrator |
2026-03-10 00:26:31.915230 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-10 00:26:31.915250 | orchestrator | Tuesday 10 March 2026 00:26:28 +0000 (0:00:00.596) 0:00:21.334 *********
2026-03-10 00:26:31.915268 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:26:31.915280 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:26:31.915290 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:26:31.915301 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:26:31.915311 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:26:31.915322 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:26:31.915338 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:26:31.915356 | orchestrator |
2026-03-10 00:26:31.915375 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-10 00:26:31.915392 | orchestrator | Tuesday 10 March 2026 00:26:29 +0000 (0:00:00.236) 0:00:21.570 *********
2026-03-10 00:26:31.915410 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.915429 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.915446 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.915462 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.915480 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:26:31.915496 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:26:31.915514 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:26:31.915532 | orchestrator |
2026-03-10 00:26:31.915550 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-10 00:26:31.915569 | orchestrator | Tuesday 10 March 2026 00:26:30 +0000 (0:00:01.057) 0:00:22.628 *********
2026-03-10 00:26:31.915589 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.915606 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.915625 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.915644 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.915662 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:26:31.915678 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:26:31.915701 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:26:31.915712 | orchestrator |
2026-03-10 00:26:31.915723 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-10 00:26:31.915734 | orchestrator | Tuesday 10 March 2026 00:26:30 +0000 (0:00:00.568) 0:00:23.196 *********
2026-03-10 00:26:31.915745 | orchestrator | ok: [testbed-manager]
2026-03-10 00:26:31.915755 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:26:31.915765 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:26:31.915776 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:26:31.915798 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:27:15.411402 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:27:15.411555 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:27:15.411571 | orchestrator |
2026-03-10 00:27:15.411584 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-10 00:27:15.411598 | orchestrator | Tuesday 10 March 2026 00:26:31 +0000 (0:00:01.149) 0:00:24.345 *********
2026-03-10 00:27:15.411609 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:27:15.411621 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:27:15.411632 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:27:15.411643 | orchestrator | changed: [testbed-manager]
2026-03-10 00:27:15.411655 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:27:15.411666 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:27:15.411677 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:27:15.411688 | orchestrator |
2026-03-10 00:27:15.411699 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-10 00:27:15.411710 | orchestrator | Tuesday 10 March 2026 00:26:49 +0000 (0:00:17.543) 0:00:41.889 *********
2026-03-10 00:27:15.411721 | orchestrator | ok: [testbed-manager]
2026-03-10 00:27:15.411760 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:27:15.411772 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:27:15.411782 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:27:15.411793 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:27:15.411803 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:27:15.411814 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:27:15.411824 | orchestrator |
2026-03-10 00:27:15.411858 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-10 00:27:15.411870 | orchestrator | Tuesday 10 March 2026 00:26:49 +0000 (0:00:00.240) 0:00:42.129 *********
2026-03-10 00:27:15.411881 | orchestrator | ok: [testbed-manager]
2026-03-10 00:27:15.411892 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:27:15.411902 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:27:15.411913 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:27:15.411923 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:27:15.411934 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:27:15.411944 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:27:15.411955 | orchestrator |
2026-03-10 00:27:15.411965 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-10 00:27:15.411976 | orchestrator | Tuesday 10 March 2026 00:26:49 +0000 (0:00:00.226) 0:00:42.334 *********
2026-03-10 00:27:15.411987 | orchestrator | ok: [testbed-manager]
2026-03-10 00:27:15.411997 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:27:15.412008 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:27:15.412018 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:27:15.412029 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:27:15.412039 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:27:15.412050 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:27:15.412061 | orchestrator |
2026-03-10 00:27:15.412072 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-10 00:27:15.412082 | orchestrator | Tuesday 10 March 2026 00:26:50 +0000 (0:00:00.226) 0:00:42.560 *********
2026-03-10
00:27:15.412095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:27:15.412110 | orchestrator | 2026-03-10 00:27:15.412121 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-10 00:27:15.412131 | orchestrator | Tuesday 10 March 2026 00:26:50 +0000 (0:00:00.296) 0:00:42.857 ********* 2026-03-10 00:27:15.412142 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.412153 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.412163 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:15.412174 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.412184 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.412195 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.412206 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.412216 | orchestrator | 2026-03-10 00:27:15.412227 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-10 00:27:15.412238 | orchestrator | Tuesday 10 March 2026 00:26:52 +0000 (0:00:01.763) 0:00:44.620 ********* 2026-03-10 00:27:15.412249 | orchestrator | changed: [testbed-manager] 2026-03-10 00:27:15.412260 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:15.412270 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:15.412281 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:15.412292 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:15.412302 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:15.412313 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:15.412323 | orchestrator | 2026-03-10 00:27:15.412334 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-10 00:27:15.412364 | 
orchestrator | Tuesday 10 March 2026 00:26:53 +0000 (0:00:01.376) 0:00:45.997 ********* 2026-03-10 00:27:15.412375 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.412386 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.412397 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.412415 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:15.412426 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.412436 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.412447 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.412458 | orchestrator | 2026-03-10 00:27:15.412468 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-10 00:27:15.412479 | orchestrator | Tuesday 10 March 2026 00:26:54 +0000 (0:00:00.803) 0:00:46.800 ********* 2026-03-10 00:27:15.412491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:27:15.412505 | orchestrator | 2026-03-10 00:27:15.412516 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-10 00:27:15.412527 | orchestrator | Tuesday 10 March 2026 00:26:54 +0000 (0:00:00.313) 0:00:47.114 ********* 2026-03-10 00:27:15.412537 | orchestrator | changed: [testbed-manager] 2026-03-10 00:27:15.412548 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:15.412559 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:15.412570 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:15.412580 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:15.412591 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:15.412602 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:15.412612 | orchestrator | 2026-03-10 00:27:15.412644 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-10 00:27:15.412656 | orchestrator | Tuesday 10 March 2026 00:26:55 +0000 (0:00:01.060) 0:00:48.174 ********* 2026-03-10 00:27:15.412667 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:27:15.412677 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:15.412688 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:15.412699 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:15.412709 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:15.412720 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:15.412730 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:15.412741 | orchestrator | 2026-03-10 00:27:15.412752 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-10 00:27:15.412763 | orchestrator | Tuesday 10 March 2026 00:26:55 +0000 (0:00:00.231) 0:00:48.406 ********* 2026-03-10 00:27:15.412774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:27:15.412785 | orchestrator | 2026-03-10 00:27:15.412795 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-10 00:27:15.412806 | orchestrator | Tuesday 10 March 2026 00:26:56 +0000 (0:00:00.325) 0:00:48.731 ********* 2026-03-10 00:27:15.412817 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.412827 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.412854 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:15.412865 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.412876 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.412887 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.412897 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.412908 | 
orchestrator | 2026-03-10 00:27:15.412919 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-10 00:27:15.412930 | orchestrator | Tuesday 10 March 2026 00:26:58 +0000 (0:00:02.116) 0:00:50.848 ********* 2026-03-10 00:27:15.412941 | orchestrator | changed: [testbed-manager] 2026-03-10 00:27:15.412952 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:15.412963 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:15.412973 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:15.412984 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:15.412995 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:15.413005 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:15.413024 | orchestrator | 2026-03-10 00:27:15.413035 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-10 00:27:15.413046 | orchestrator | Tuesday 10 March 2026 00:26:59 +0000 (0:00:01.226) 0:00:52.075 ********* 2026-03-10 00:27:15.413057 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:15.413068 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:15.413078 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:15.413089 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:15.413100 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:15.413111 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:15.413122 | orchestrator | changed: [testbed-manager] 2026-03-10 00:27:15.413132 | orchestrator | 2026-03-10 00:27:15.413143 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-10 00:27:15.413154 | orchestrator | Tuesday 10 March 2026 00:27:12 +0000 (0:00:13.144) 0:01:05.220 ********* 2026-03-10 00:27:15.413165 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.413176 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.413187 | orchestrator | ok: 
[testbed-node-5] 2026-03-10 00:27:15.413198 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.413208 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.413219 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.413230 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.413241 | orchestrator | 2026-03-10 00:27:15.413252 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-10 00:27:15.413263 | orchestrator | Tuesday 10 March 2026 00:27:13 +0000 (0:00:00.920) 0:01:06.141 ********* 2026-03-10 00:27:15.413274 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.413285 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.413295 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:15.413306 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.413317 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.413327 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.413338 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.413349 | orchestrator | 2026-03-10 00:27:15.413360 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-10 00:27:15.413371 | orchestrator | Tuesday 10 March 2026 00:27:14 +0000 (0:00:00.941) 0:01:07.082 ********* 2026-03-10 00:27:15.413388 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.413399 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.413409 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.413420 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:15.413431 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.413442 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.413453 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.413463 | orchestrator | 2026-03-10 00:27:15.413474 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-10 00:27:15.413485 | orchestrator | Tuesday 
10 March 2026 00:27:14 +0000 (0:00:00.260) 0:01:07.342 ********* 2026-03-10 00:27:15.413496 | orchestrator | ok: [testbed-manager] 2026-03-10 00:27:15.413507 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:15.413518 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:15.413528 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:15.413539 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:15.413550 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:15.413561 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:15.413571 | orchestrator | 2026-03-10 00:27:15.413582 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-10 00:27:15.413593 | orchestrator | Tuesday 10 March 2026 00:27:15 +0000 (0:00:00.226) 0:01:07.569 ********* 2026-03-10 00:27:15.413605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:27:15.413616 | orchestrator | 2026-03-10 00:27:15.413634 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-10 00:29:46.682628 | orchestrator | Tuesday 10 March 2026 00:27:15 +0000 (0:00:00.278) 0:01:07.847 ********* 2026-03-10 00:29:46.682752 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:46.682772 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.682815 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.682825 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.682834 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.682842 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.682851 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.682863 | orchestrator | 2026-03-10 00:29:46.682876 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-10 00:29:46.682886 | orchestrator | Tuesday 10 March 2026 00:27:17 +0000 (0:00:01.600) 0:01:09.448 ********* 2026-03-10 00:29:46.682895 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:29:46.682904 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:29:46.682913 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:29:46.682921 | orchestrator | changed: [testbed-manager] 2026-03-10 00:29:46.682930 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:29:46.682938 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:29:46.682947 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:29:46.682956 | orchestrator | 2026-03-10 00:29:46.682967 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-10 00:29:46.682983 | orchestrator | Tuesday 10 March 2026 00:27:17 +0000 (0:00:00.566) 0:01:10.014 ********* 2026-03-10 00:29:46.682998 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:46.683011 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.683026 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.683040 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.683054 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.683070 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.683084 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.683098 | orchestrator | 2026-03-10 00:29:46.683114 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-10 00:29:46.683123 | orchestrator | Tuesday 10 March 2026 00:27:17 +0000 (0:00:00.280) 0:01:10.295 ********* 2026-03-10 00:29:46.683131 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:46.683140 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.683148 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.683156 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.683166 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.683175 | 
orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.683184 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.683194 | orchestrator | 2026-03-10 00:29:46.683204 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-10 00:29:46.683214 | orchestrator | Tuesday 10 March 2026 00:27:19 +0000 (0:00:01.180) 0:01:11.475 ********* 2026-03-10 00:29:46.683224 | orchestrator | changed: [testbed-manager] 2026-03-10 00:29:46.683234 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:29:46.683243 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:29:46.683253 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:29:46.683262 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:29:46.683272 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:29:46.683281 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:29:46.683291 | orchestrator | 2026-03-10 00:29:46.683305 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-10 00:29:46.683315 | orchestrator | Tuesday 10 March 2026 00:27:20 +0000 (0:00:01.683) 0:01:13.159 ********* 2026-03-10 00:29:46.683325 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:46.683334 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.683344 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.683353 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.683363 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.683372 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.683382 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.683392 | orchestrator | 2026-03-10 00:29:46.683402 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-10 00:29:46.683434 | orchestrator | Tuesday 10 March 2026 00:27:23 +0000 (0:00:02.374) 0:01:15.533 ********* 2026-03-10 00:29:46.683444 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:46.683452 
| orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.683460 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.683469 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.683477 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.683486 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.683494 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.683502 | orchestrator | 2026-03-10 00:29:46.683511 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-10 00:29:46.683519 | orchestrator | Tuesday 10 March 2026 00:28:04 +0000 (0:00:41.191) 0:01:56.725 ********* 2026-03-10 00:29:46.683528 | orchestrator | changed: [testbed-manager] 2026-03-10 00:29:46.683536 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:29:46.683545 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:29:46.683554 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:29:46.683562 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:29:46.683571 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:29:46.683579 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:29:46.683588 | orchestrator | 2026-03-10 00:29:46.683596 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-10 00:29:46.683605 | orchestrator | Tuesday 10 March 2026 00:29:30 +0000 (0:01:25.991) 0:03:22.716 ********* 2026-03-10 00:29:46.683614 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:46.683622 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.683631 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.683639 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.683647 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.683656 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.683664 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.683672 | orchestrator | 2026-03-10 00:29:46.683681 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-10 00:29:46.683689 | orchestrator | Tuesday 10 March 2026 00:29:32 +0000 (0:00:01.819) 0:03:24.536 ********* 2026-03-10 00:29:46.683698 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:46.683706 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:46.683715 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:46.683723 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:46.683731 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:46.683739 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:46.683748 | orchestrator | changed: [testbed-manager] 2026-03-10 00:29:46.683757 | orchestrator | 2026-03-10 00:29:46.683765 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-10 00:29:46.683774 | orchestrator | Tuesday 10 March 2026 00:29:45 +0000 (0:00:13.329) 0:03:37.866 ********* 2026-03-10 00:29:46.683840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-10 00:29:46.683871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-10 00:29:46.683892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-10 00:29:46.683903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-10 00:29:46.683912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-10 00:29:46.683920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-10 00:29:46.683929 | orchestrator | 2026-03-10 00:29:46.683938 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-10 00:29:46.683947 | orchestrator | Tuesday 10 March 2026 00:29:45 +0000 (0:00:00.413) 0:03:38.280 ********* 2026-03-10 00:29:46.683955 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:29:46.683966 | orchestrator | 
skipping: [testbed-manager] 2026-03-10 00:29:46.683981 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:29:46.683995 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:29:46.684010 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:29:46.684031 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:29:46.684046 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:29:46.684060 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:29:46.684074 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 00:29:46.684083 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 00:29:46.684092 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 00:29:46.684100 | orchestrator | 2026-03-10 00:29:46.684109 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-10 00:29:46.684117 | orchestrator | Tuesday 10 March 2026 00:29:46 +0000 (0:00:00.725) 0:03:39.006 ********* 2026-03-10 00:29:46.684126 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:29:46.684135 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:29:46.684144 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:29:46.684152 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:29:46.684161 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:29:46.684176 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:29:53.425105 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:29:53.425211 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:29:53.425247 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:29:53.425259 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:29:53.425269 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:29:53.425282 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:29:53.425298 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:29:53.425308 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:29:53.425318 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:29:53.425328 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:29:53.425338 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:29:53.425347 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:29:53.425357 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:29:53.425366 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:29:53.425376 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:29:53.425385 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:29:53.425394 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:29:53.425404 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:29:53.425415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:29:53.425424 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:29:53.425433 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:29:53.425443 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:29:53.425452 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:29:53.425462 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:29:53.425471 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:29:53.425480 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:29:53.425490 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:29:53.425499 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:29:53.425508 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:29:53.425518 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})
2026-03-10 00:29:53.425549 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-10 00:29:53.425560 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:29:53.425570 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-10 00:29:53.425579 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-10 00:29:53.425588 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-10 00:29:53.425606 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-10 00:29:53.425616 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:29:53.425625 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:29:53.425635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-10 00:29:53.425646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-10 00:29:53.425656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-10 00:29:53.425668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-10 00:29:53.425679 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-10 00:29:53.425705 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-10 00:29:53.425717 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-10 00:29:53.425728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-10 00:29:53.425741 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-10 00:29:53.425758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-10 00:29:53.425769 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-10 00:29:53.425809 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-10 00:29:53.425820 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-10 00:29:53.425832 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-10 00:29:53.425842 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-10 00:29:53.425853 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-10 00:29:53.425865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-10 00:29:53.425882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-10 00:29:53.425894 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-10 00:29:53.425905 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-10 00:29:53.425916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-10 00:29:53.425927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-10 00:29:53.425937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-10 00:29:53.425948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-10 00:29:53.425959 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-10 00:29:53.425969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-10 00:29:53.425979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-10 00:29:53.425990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-10 00:29:53.426002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-10 00:29:53.426013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-10 00:29:53.426116 | orchestrator |
2026-03-10 00:29:53.426135 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-10 00:29:53.426152 | orchestrator | Tuesday 10 March 2026 00:29:51 +0000 (0:00:04.725) 0:03:43.731 *********
2026-03-10 00:29:53.426167 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426181 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426195 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426209 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426252 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426267 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-10 00:29:53.426282 | orchestrator |
2026-03-10 00:29:53.426298 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-10 00:29:53.426315 | orchestrator | Tuesday 10 March 2026 00:29:52 +0000 (0:00:01.525) 0:03:45.256 *********
2026-03-10 00:29:53.426330 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:29:53.426347 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:29:53.426363 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:29:53.426379 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:29:53.426395 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:29:53.426412 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:29:53.426428 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:29:53.426443 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:29:53.426453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:29:53.426463 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:29:53.426484 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.874910 | orchestrator |
2026-03-10 00:30:06.874995 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-10 00:30:06.875004 | orchestrator | Tuesday 10 March 2026 00:29:53 +0000 (0:00:00.603) 0:03:45.860 *********
2026-03-10 00:30:06.875009 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875015 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875021 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:30:06.875027 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:30:06.875032 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875037 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875042 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:30:06.875047 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:30:06.875052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875057 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:30:06.875066 | orchestrator |
2026-03-10 00:30:06.875071 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-10 00:30:06.875093 | orchestrator | Tuesday 10 March 2026 00:29:54 +0000 (0:00:00.620) 0:03:46.481 *********
2026-03-10 00:30:06.875100 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875105 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:30:06.875110 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875115 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875119 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:30:06.875124 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:30:06.875129 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875133 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:30:06.875138 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875143 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875148 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:30:06.875153 | orchestrator |
2026-03-10 00:30:06.875160 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-10 00:30:06.875168 | orchestrator | Tuesday 10 March 2026 00:29:54 +0000 (0:00:00.326) 0:03:47.067 *********
2026-03-10 00:30:06.875173 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:30:06.875178 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:30:06.875182 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:30:06.875187 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:30:06.875192 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:30:06.875196 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:30:06.875201 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:30:06.875206 | orchestrator |
2026-03-10 00:30:06.875210 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-10 00:30:06.875215 | orchestrator | Tuesday 10 March 2026 00:29:54 +0000 (0:00:00.326) 0:03:47.394 *********
2026-03-10 00:30:06.875220 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:30:06.875225 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:30:06.875230 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:30:06.875235 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:30:06.875240 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:30:06.875244 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:30:06.875249 | orchestrator | ok: [testbed-manager]
2026-03-10 00:30:06.875253 | orchestrator |
2026-03-10 00:30:06.875258 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-10 00:30:06.875263 | orchestrator | Tuesday 10 March 2026 00:30:00 +0000 (0:00:05.600) 0:03:52.994 *********
2026-03-10 00:30:06.875268 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-10 00:30:06.875273 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-10 00:30:06.875278 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:30:06.875282 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-10 00:30:06.875287 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:30:06.875291 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-10 00:30:06.875296 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:30:06.875301 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-10 00:30:06.875306 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:30:06.875310 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-10 00:30:06.875328 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:30:06.875333 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:30:06.875338 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-10 00:30:06.875342 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:30:06.875347 | orchestrator |
2026-03-10 00:30:06.875356 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-10 00:30:06.875361 | orchestrator | Tuesday 10 March 2026 00:30:00 +0000 (0:00:00.306) 0:03:53.300 *********
2026-03-10 00:30:06.875366 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-10 00:30:06.875371 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-10 00:30:06.875376 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-10 00:30:06.875391 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-10 00:30:06.875396 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-10 00:30:06.875401 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-10 00:30:06.875405 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-10 00:30:06.875410 | orchestrator |
2026-03-10 00:30:06.875414 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-10 00:30:06.875419 | orchestrator | Tuesday 10 March 2026 00:30:01 +0000 (0:00:01.050) 0:03:54.351 *********
2026-03-10 00:30:06.875426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:30:06.875434 | orchestrator |
2026-03-10 00:30:06.875438 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-10 00:30:06.875443 | orchestrator | Tuesday 10 March 2026 00:30:02 +0000 (0:00:00.506) 0:03:54.858 *********
2026-03-10 00:30:06.875448 | orchestrator | ok: [testbed-manager]
2026-03-10 00:30:06.875452 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:30:06.875457 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:30:06.875462 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:30:06.875467 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:30:06.875472 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:30:06.875478 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:30:06.875483 | orchestrator |
2026-03-10 00:30:06.875488 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-10 00:30:06.875494 | orchestrator | Tuesday 10 March 2026 00:30:03 +0000 (0:00:01.355) 0:03:56.213 *********
2026-03-10 00:30:06.875500 | orchestrator | ok: [testbed-manager]
2026-03-10 00:30:06.875505 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:30:06.875510 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:30:06.875516 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:30:06.875521 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:30:06.875526 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:30:06.875532 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:30:06.875537 | orchestrator |
2026-03-10 00:30:06.875543 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-10 00:30:06.875548 | orchestrator | Tuesday 10 March 2026 00:30:04 +0000 (0:00:00.636) 0:03:56.849 *********
2026-03-10 00:30:06.875554 | orchestrator | changed: [testbed-manager]
2026-03-10 00:30:06.875560 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:30:06.875565 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:30:06.875571 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:30:06.875576 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:30:06.875581 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:30:06.875587 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:30:06.875592 | orchestrator |
2026-03-10 00:30:06.875598 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-10 00:30:06.875603 | orchestrator | Tuesday 10 March 2026 00:30:05 +0000 (0:00:00.660) 0:03:57.510 *********
2026-03-10 00:30:06.875609 | orchestrator | ok: [testbed-manager]
2026-03-10 00:30:06.875615 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:30:06.875620 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:30:06.875626 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:30:06.875631 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:30:06.875637 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:30:06.875642 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:30:06.875647 | orchestrator |
2026-03-10 00:30:06.875653 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-10 00:30:06.875663 | orchestrator | Tuesday 10 March 2026 00:30:05 +0000 (0:00:00.730) 0:03:58.240 *********
2026-03-10 00:30:06.875673 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101083.5571313, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:06.875681 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101110.0769882, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:06.875687 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101111.0362322, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:06.875705 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101108.5874486, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.932717 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101114.304365, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.932904 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101112.2986753, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.932938 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101107.863153, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933724 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933795 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933809 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933821 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933865 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933878 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933889 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:30:11.933913 | orchestrator |
2026-03-10 00:30:11.933927 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-10 00:30:11.933940 | orchestrator | Tuesday 10 March 2026 00:30:06 +0000 (0:00:01.065) 0:03:59.306 *********
2026-03-10 00:30:11.933964 | orchestrator | changed: [testbed-manager]
2026-03-10 00:30:11.933977 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:30:11.933987 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:30:11.933998 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:30:11.934009 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:30:11.934211 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:30:11.934226 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:30:11.934237 | orchestrator |
2026-03-10 00:30:11.934248 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-10 00:30:11.934259 | orchestrator | Tuesday 10 March 2026 00:30:07 +0000 (0:00:01.128) 0:04:00.434 *********
2026-03-10 00:30:11.934270 | orchestrator | changed: [testbed-manager]
2026-03-10 00:30:11.934280 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:30:11.934291 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:30:11.934301 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:30:11.934311 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:30:11.934322 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:30:11.934332 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:30:11.934342 | orchestrator |
2026-03-10 00:30:11.934360 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-10 00:30:11.934371 | orchestrator | Tuesday 10 March 2026 00:30:09 +0000 (0:00:01.201) 0:04:01.636 *********
2026-03-10 00:30:11.934382 | orchestrator | changed: [testbed-manager]
2026-03-10 00:30:11.934392 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:30:11.934403 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:30:11.934413 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:30:11.934424 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:30:11.934434 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:30:11.934444 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:30:11.934455 | orchestrator |
2026-03-10 00:30:11.934465 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-10 00:30:11.934476 | orchestrator | Tuesday 10 March 2026 00:30:10 +0000 (0:00:01.157) 0:04:02.793 *********
2026-03-10 00:30:11.934486 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:30:11.934497 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:30:11.934508 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:30:11.934518 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:30:11.934529 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:30:11.934539 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:30:11.934549 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:30:11.934560 | orchestrator |
2026-03-10 00:30:11.934571 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-10 00:30:11.934581 | orchestrator | Tuesday 10 March 2026 00:30:10 +0000 (0:00:00.331) 0:04:03.125 *********
2026-03-10 00:30:11.934592 | orchestrator | ok: [testbed-manager]
2026-03-10 00:30:11.934604 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:30:11.934614 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:30:11.934625 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:30:11.934635 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:30:11.934645 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:30:11.934656 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:30:11.934666 | orchestrator |
2026-03-10 00:30:11.934686 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-10 00:30:11.934706 | orchestrator | Tuesday 10 March 2026 00:30:11 +0000 (0:00:00.838) 0:04:03.963 *********
2026-03-10 00:30:11.934727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:30:11.934761 | orchestrator |
2026-03-10 00:30:11.934819 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-10 00:30:11.934851 | orchestrator | Tuesday 10 March 2026 00:30:11 +0000 (0:00:00.407) 0:04:04.371 *********
2026-03-10 00:31:32.064158 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.064288 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:31:32.064307 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:31:32.064319 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:31:32.064330 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:31:32.064340 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:31:32.064352 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:31:32.064363 | orchestrator |
2026-03-10 00:31:32.064375 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-10 00:31:32.064388 | orchestrator | Tuesday 10 March 2026 00:30:20 +0000 (0:00:08.227) 0:04:12.598 *********
2026-03-10 00:31:32.064399 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.064410 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:31:32.064420 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:31:32.064431 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:31:32.064442 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:31:32.064453 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:31:32.064463 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:31:32.064474 | orchestrator |
2026-03-10 00:31:32.064485 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-10 00:31:32.064496 | orchestrator | Tuesday 10 March 2026 00:30:21 +0000 (0:00:01.231) 0:04:13.830 *********
2026-03-10 00:31:32.064507 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.064518 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:31:32.064529 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:31:32.064540 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:31:32.064550 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:31:32.064561 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:31:32.064571 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:31:32.064582 | orchestrator |
2026-03-10 00:31:32.064593 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-10 00:31:32.064604 | orchestrator | Tuesday 10 March 2026 00:30:22 +0000 (0:00:01.151) 0:04:14.981 *********
2026-03-10 00:31:32.064615 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.064625 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:31:32.064636 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:31:32.064647 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:31:32.064658 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:31:32.064669 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:31:32.064680 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:31:32.064693 | orchestrator |
2026-03-10 00:31:32.064705 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-10 00:31:32.064718 | orchestrator | Tuesday 10 March 2026 00:30:22 +0000 (0:00:00.275) 0:04:15.257 *********
2026-03-10 00:31:32.064731 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.064770 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:31:32.064785 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:31:32.064798 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:31:32.064810 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:31:32.064822 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:31:32.064833 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:31:32.064843 | orchestrator |
2026-03-10 00:31:32.064854 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-10 00:31:32.064865 | orchestrator | Tuesday 10 March 2026 00:30:23 +0000 (0:00:00.299) 0:04:15.556 *********
2026-03-10 00:31:32.064876 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.064886 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:31:32.064897 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:31:32.064908 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:31:32.064941 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:31:32.064952 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:31:32.064962 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:31:32.064973 | orchestrator |
2026-03-10 00:31:32.064984 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-10 00:31:32.064995 | orchestrator | Tuesday 10 March 2026 00:30:23 +0000 (0:00:00.293) 0:04:15.850 *********
2026-03-10 00:31:32.065005 | orchestrator | ok: [testbed-manager]
2026-03-10 00:31:32.065016 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:31:32.065026 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:31:32.065037 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:31:32.065048 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:31:32.065058 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:31:32.065069 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:31:32.065079 | orchestrator |
2026-03-10 00:31:32.065090 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-10 00:31:32.065101 | orchestrator | Tuesday 10 March 2026 00:30:29 +0000 (0:00:05.702) 0:04:21.552 *********
2026-03-10 00:31:32.065115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:31:32.065137 | orchestrator |
2026-03-10 00:31:32.065156 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-10 00:31:32.065175 | orchestrator | Tuesday 10 March 2026 00:30:29 +0000 (0:00:00.416) 0:04:21.969 *********
2026-03-10 00:31:32.065194 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065213 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-10 00:31:32.065231 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065249 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-10 00:31:32.065260 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:31:32.065271 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:31:32.065299 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065310 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-10 00:31:32.065321 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065332 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-10 00:31:32.065342 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:31:32.065353 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065364 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:31:32.065374 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-10 00:31:32.065385 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065396 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-10 00:31:32.065424 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:31:32.065436 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:31:32.065446 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-10 00:31:32.065457 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-10 00:31:32.065467 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:31:32.065478 | orchestrator |
2026-03-10 00:31:32.065488 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-10 00:31:32.065499 | orchestrator | Tuesday 10 March 2026 00:30:29 +0000 (0:00:00.359) 0:04:22.329 *********
2026-03-10 00:31:32.065510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:31:32.065522 | orchestrator |
2026-03-10 00:31:32.065532 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-10 00:31:32.065552 | orchestrator | Tuesday 10 March 2026 00:30:30 +0000 (0:00:00.423) 0:04:22.752 *********
2026-03-10 00:31:32.065563 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-10 00:31:32.065574 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-10 00:31:32.065585 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:31:32.065595 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-10 00:31:32.065606 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:31:32.065616 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-10 00:31:32.065627 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:31:32.065637 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-10 00:31:32.065648 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:31:32.065658 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-10 00:31:32.065669 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:31:32.065680 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:31:32.065690 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-10 00:31:32.065700 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:31:32.065711 | orchestrator |
2026-03-10 00:31:32.065722 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-10 00:31:32.065733 | orchestrator | Tuesday 10 March 2026 00:30:30 +0000 (0:00:00.327) 0:04:23.080 *********
2026-03-10 00:31:32.065772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:31:32.065785 | orchestrator |
2026-03-10 00:31:32.065796 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-10 00:31:32.065807 | orchestrator | Tuesday 10 March 2026 00:30:31 +0000 (0:00:00.426) 0:04:23.507 *********
2026-03-10 00:31:32.065817 | orchestrator | changed: [testbed-node-2]
2026-03-10 
00:31:32.065828 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:32.065838 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:32.065849 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:32.065866 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:32.065877 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:32.065887 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:32.065898 | orchestrator | 2026-03-10 00:31:32.065909 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-10 00:31:32.065919 | orchestrator | Tuesday 10 March 2026 00:31:06 +0000 (0:00:35.086) 0:04:58.593 ********* 2026-03-10 00:31:32.065930 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:32.065941 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:32.065951 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:32.065962 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:32.065972 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:32.065983 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:32.065993 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:32.066004 | orchestrator | 2026-03-10 00:31:32.066014 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-10 00:31:32.066094 | orchestrator | Tuesday 10 March 2026 00:31:15 +0000 (0:00:09.415) 0:05:08.009 ********* 2026-03-10 00:31:32.066105 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:32.066115 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:32.066126 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:32.066136 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:32.066147 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:32.066158 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:32.066168 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:32.066186 | 
orchestrator | 2026-03-10 00:31:32.066206 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-10 00:31:32.066237 | orchestrator | Tuesday 10 March 2026 00:31:23 +0000 (0:00:08.328) 0:05:16.338 ********* 2026-03-10 00:31:32.066257 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:32.066275 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:32.066289 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:32.066299 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:32.066310 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:32.066321 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:32.066331 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:32.066342 | orchestrator | 2026-03-10 00:31:32.066353 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-10 00:31:32.066363 | orchestrator | Tuesday 10 March 2026 00:31:25 +0000 (0:00:01.800) 0:05:18.138 ********* 2026-03-10 00:31:32.066374 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:32.066384 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:32.066395 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:32.066405 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:32.066416 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:32.066426 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:32.066437 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:32.066448 | orchestrator | 2026-03-10 00:31:32.066467 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-10 00:31:43.188930 | orchestrator | Tuesday 10 March 2026 00:31:32 +0000 (0:00:06.351) 0:05:24.489 ********* 2026-03-10 00:31:43.189043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:31:43.189062 | orchestrator | 2026-03-10 00:31:43.189075 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-10 00:31:43.189086 | orchestrator | Tuesday 10 March 2026 00:31:32 +0000 (0:00:00.549) 0:05:25.039 ********* 2026-03-10 00:31:43.189096 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:43.189107 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:43.189117 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:43.189126 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:43.189136 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:43.189146 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:43.189155 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:43.189165 | orchestrator | 2026-03-10 00:31:43.189175 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-10 00:31:43.189185 | orchestrator | Tuesday 10 March 2026 00:31:33 +0000 (0:00:00.702) 0:05:25.741 ********* 2026-03-10 00:31:43.189195 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:43.189206 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:43.189216 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:43.189225 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:43.189235 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:43.189244 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:43.189253 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:43.189263 | orchestrator | 2026-03-10 00:31:43.189272 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-10 00:31:43.189282 | orchestrator | Tuesday 10 March 2026 00:31:34 +0000 (0:00:01.675) 0:05:27.416 ********* 2026-03-10 00:31:43.189292 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:43.189301 | orchestrator | changed: [testbed-node-3] 
2026-03-10 00:31:43.189311 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:43.189320 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:43.189330 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:43.189340 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:43.189350 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:43.189359 | orchestrator | 2026-03-10 00:31:43.189369 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-10 00:31:43.189379 | orchestrator | Tuesday 10 March 2026 00:31:35 +0000 (0:00:00.825) 0:05:28.242 ********* 2026-03-10 00:31:43.189407 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:43.189417 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:43.189426 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:43.189435 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:31:43.189445 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:31:43.189454 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:31:43.189464 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:31:43.189473 | orchestrator | 2026-03-10 00:31:43.189485 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-10 00:31:43.189500 | orchestrator | Tuesday 10 March 2026 00:31:36 +0000 (0:00:00.269) 0:05:28.512 ********* 2026-03-10 00:31:43.189517 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:43.189527 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:43.189536 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:43.189554 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:31:43.189564 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:31:43.189573 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:31:43.189582 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:31:43.189592 | orchestrator | 2026-03-10 00:31:43.189601 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-10 00:31:43.189611 | orchestrator | Tuesday 10 March 2026 00:31:36 +0000 (0:00:00.375) 0:05:28.887 ********* 2026-03-10 00:31:43.189620 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:43.189630 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:43.189639 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:43.189649 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:43.189658 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:43.189667 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:43.189677 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:43.189686 | orchestrator | 2026-03-10 00:31:43.189696 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-10 00:31:43.189705 | orchestrator | Tuesday 10 March 2026 00:31:36 +0000 (0:00:00.290) 0:05:29.178 ********* 2026-03-10 00:31:43.189714 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:43.189724 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:43.189733 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:43.189793 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:31:43.189804 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:31:43.189813 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:31:43.189822 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:31:43.189832 | orchestrator | 2026-03-10 00:31:43.189842 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-10 00:31:43.189852 | orchestrator | Tuesday 10 March 2026 00:31:37 +0000 (0:00:00.299) 0:05:29.477 ********* 2026-03-10 00:31:43.189862 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:43.189871 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:43.189881 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:43.189894 | orchestrator | ok: 
[testbed-node-5] 2026-03-10 00:31:43.189910 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:43.189920 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:43.189929 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:43.189939 | orchestrator | 2026-03-10 00:31:43.189949 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-10 00:31:43.189958 | orchestrator | Tuesday 10 March 2026 00:31:37 +0000 (0:00:00.314) 0:05:29.792 ********* 2026-03-10 00:31:43.189968 | orchestrator | ok: [testbed-manager] =>  2026-03-10 00:31:43.189978 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.189987 | orchestrator | ok: [testbed-node-3] =>  2026-03-10 00:31:43.189997 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.190006 | orchestrator | ok: [testbed-node-4] =>  2026-03-10 00:31:43.190084 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.190098 | orchestrator | ok: [testbed-node-5] =>  2026-03-10 00:31:43.190108 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.190133 | orchestrator | ok: [testbed-node-0] =>  2026-03-10 00:31:43.190153 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.190162 | orchestrator | ok: [testbed-node-1] =>  2026-03-10 00:31:43.190172 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.190181 | orchestrator | ok: [testbed-node-2] =>  2026-03-10 00:31:43.190190 | orchestrator |  docker_version: 5:27.5.1 2026-03-10 00:31:43.190200 | orchestrator | 2026-03-10 00:31:43.190209 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-10 00:31:43.190219 | orchestrator | Tuesday 10 March 2026 00:31:37 +0000 (0:00:00.304) 0:05:30.097 ********* 2026-03-10 00:31:43.190228 | orchestrator | ok: [testbed-manager] =>  2026-03-10 00:31:43.190238 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190247 | orchestrator | ok: [testbed-node-3] =>  2026-03-10 00:31:43.190257 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190266 | orchestrator | ok: [testbed-node-4] =>  2026-03-10 00:31:43.190276 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190285 | orchestrator | ok: [testbed-node-5] =>  2026-03-10 00:31:43.190294 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190304 | orchestrator | ok: [testbed-node-0] =>  2026-03-10 00:31:43.190313 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190322 | orchestrator | ok: [testbed-node-1] =>  2026-03-10 00:31:43.190332 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190341 | orchestrator | ok: [testbed-node-2] =>  2026-03-10 00:31:43.190351 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-10 00:31:43.190360 | orchestrator | 2026-03-10 00:31:43.190370 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-10 00:31:43.190379 | orchestrator | Tuesday 10 March 2026 00:31:37 +0000 (0:00:00.303) 0:05:30.400 ********* 2026-03-10 00:31:43.190389 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:43.190398 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:43.190408 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:43.190417 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:31:43.190426 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:31:43.190436 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:31:43.190445 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:31:43.190455 | orchestrator | 2026-03-10 00:31:43.190464 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-10 00:31:43.190474 | orchestrator | Tuesday 10 March 2026 00:31:38 +0000 (0:00:00.268) 0:05:30.668 ********* 2026-03-10 00:31:43.190483 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:43.190492 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:43.190502 
| orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:43.190511 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:31:43.190520 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:31:43.190530 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:31:43.190539 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:31:43.190549 | orchestrator | 2026-03-10 00:31:43.190558 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-10 00:31:43.190568 | orchestrator | Tuesday 10 March 2026 00:31:38 +0000 (0:00:00.266) 0:05:30.934 ********* 2026-03-10 00:31:43.190580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:31:43.190591 | orchestrator | 2026-03-10 00:31:43.190606 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-10 00:31:43.190621 | orchestrator | Tuesday 10 March 2026 00:31:38 +0000 (0:00:00.437) 0:05:31.371 ********* 2026-03-10 00:31:43.190636 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:43.190646 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:43.190656 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:43.190672 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:43.190682 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:43.190700 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:43.190710 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:43.190722 | orchestrator | 2026-03-10 00:31:43.190754 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-10 00:31:43.190764 | orchestrator | Tuesday 10 March 2026 00:31:39 +0000 (0:00:00.949) 0:05:32.321 ********* 2026-03-10 00:31:43.190774 | orchestrator | ok: [testbed-node-3] 
2026-03-10 00:31:43.190783 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:43.190792 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:43.190802 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:43.190811 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:43.190820 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:43.190830 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:43.190847 | orchestrator | 2026-03-10 00:31:43.190858 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-10 00:31:43.190868 | orchestrator | Tuesday 10 March 2026 00:31:42 +0000 (0:00:02.931) 0:05:35.252 ********* 2026-03-10 00:31:43.190878 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-10 00:31:43.190887 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-10 00:31:43.190896 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-10 00:31:43.190906 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-10 00:31:43.190915 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-10 00:31:43.190925 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-10 00:31:43.190934 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:43.190943 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-10 00:31:43.190953 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-10 00:31:43.190962 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:43.190971 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-10 00:31:43.190980 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-10 00:31:43.190990 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-10 00:31:43.190999 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-10 00:31:43.191008 | 
orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:43.191018 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-10 00:31:43.191035 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-10 00:32:46.735302 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-10 00:32:46.735437 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:32:46.735452 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-10 00:32:46.735463 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-10 00:32:46.735474 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-10 00:32:46.735485 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:32:46.735496 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:32:46.735506 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-10 00:32:46.735517 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-10 00:32:46.735528 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-10 00:32:46.735538 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:32:46.735549 | orchestrator | 2026-03-10 00:32:46.735561 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-10 00:32:46.735573 | orchestrator | Tuesday 10 March 2026 00:31:43 +0000 (0:00:00.579) 0:05:35.832 ********* 2026-03-10 00:32:46.735584 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.735595 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.735606 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.735616 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.735628 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.735639 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.735649 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.735684 | orchestrator | 2026-03-10 
00:32:46.735696 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-10 00:32:46.735706 | orchestrator | Tuesday 10 March 2026 00:31:50 +0000 (0:00:06.705) 0:05:42.537 ********* 2026-03-10 00:32:46.735717 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.735777 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.735789 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.735799 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.735810 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.735833 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.735846 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.735858 | orchestrator | 2026-03-10 00:32:46.735880 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-10 00:32:46.735893 | orchestrator | Tuesday 10 March 2026 00:31:51 +0000 (0:00:01.041) 0:05:43.578 ********* 2026-03-10 00:32:46.735904 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.735916 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.735928 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.735939 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.735951 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.735962 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.735974 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.735986 | orchestrator | 2026-03-10 00:32:46.735997 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-10 00:32:46.736009 | orchestrator | Tuesday 10 March 2026 00:31:59 +0000 (0:00:08.151) 0:05:51.730 ********* 2026-03-10 00:32:46.736021 | orchestrator | changed: [testbed-manager] 2026-03-10 00:32:46.736033 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736045 | orchestrator | changed: [testbed-node-5] 2026-03-10 
00:32:46.736057 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736069 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.736080 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.736093 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.736104 | orchestrator | 2026-03-10 00:32:46.736116 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-10 00:32:46.736128 | orchestrator | Tuesday 10 March 2026 00:32:02 +0000 (0:00:03.273) 0:05:55.003 ********* 2026-03-10 00:32:46.736140 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.736152 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736164 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736176 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.736188 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.736199 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.736209 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.736220 | orchestrator | 2026-03-10 00:32:46.736230 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-10 00:32:46.736241 | orchestrator | Tuesday 10 March 2026 00:32:03 +0000 (0:00:01.331) 0:05:56.335 ********* 2026-03-10 00:32:46.736251 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.736268 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736286 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736303 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.736319 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.736336 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.736353 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.736371 | orchestrator | 2026-03-10 00:32:46.736388 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-10 
00:32:46.736407 | orchestrator | Tuesday 10 March 2026 00:32:05 +0000 (0:00:01.570) 0:05:57.905 ********* 2026-03-10 00:32:46.736424 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:32:46.736441 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:32:46.736459 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:32:46.736477 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:32:46.736508 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:32:46.736520 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:32:46.736530 | orchestrator | changed: [testbed-manager] 2026-03-10 00:32:46.736541 | orchestrator | 2026-03-10 00:32:46.736551 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-10 00:32:46.736562 | orchestrator | Tuesday 10 March 2026 00:32:06 +0000 (0:00:00.644) 0:05:58.549 ********* 2026-03-10 00:32:46.736572 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.736583 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.736593 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.736604 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736614 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736624 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.736635 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.736645 | orchestrator | 2026-03-10 00:32:46.736656 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-10 00:32:46.736685 | orchestrator | Tuesday 10 March 2026 00:32:15 +0000 (0:00:09.873) 0:06:08.422 ********* 2026-03-10 00:32:46.736696 | orchestrator | changed: [testbed-manager] 2026-03-10 00:32:46.736707 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736717 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736793 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.736805 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 00:32:46.736815 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.736825 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.736835 | orchestrator | 2026-03-10 00:32:46.736845 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-10 00:32:46.736854 | orchestrator | Tuesday 10 March 2026 00:32:16 +0000 (0:00:00.931) 0:06:09.353 ********* 2026-03-10 00:32:46.736863 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.736873 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.736882 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736891 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736901 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.736910 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.736919 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.736928 | orchestrator | 2026-03-10 00:32:46.736937 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-10 00:32:46.736947 | orchestrator | Tuesday 10 March 2026 00:32:27 +0000 (0:00:10.263) 0:06:19.617 ********* 2026-03-10 00:32:46.736956 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:46.736965 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:32:46.736975 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:32:46.736984 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:32:46.736993 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:32:46.737002 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:32:46.737011 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:32:46.737020 | orchestrator | 2026-03-10 00:32:46.737030 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-10 00:32:46.737039 | orchestrator | Tuesday 10 March 2026 00:32:39 +0000 (0:00:12.004) 0:06:31.621 ********* 2026-03-10 
00:32:46.737048 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-10 00:32:46.737058 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-10 00:32:46.737067 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-10 00:32:46.737076 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-10 00:32:46.737085 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-10 00:32:46.737094 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-10 00:32:46.737104 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-10 00:32:46.737113 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-10 00:32:46.737122 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-10 00:32:46.737131 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-10 00:32:46.737148 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-10 00:32:46.737201 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-10 00:32:46.737212 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-10 00:32:46.737221 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-10 00:32:46.737230 | orchestrator |
2026-03-10 00:32:46.737240 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-10 00:32:46.737249 | orchestrator | Tuesday 10 March 2026  00:32:40 +0000 (0:00:01.194) 0:06:32.816 *********
2026-03-10 00:32:46.737263 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:46.737272 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:46.737282 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:46.737291 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:32:46.737300 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:32:46.737309 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:32:46.737318 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:32:46.737327 | orchestrator |
2026-03-10 00:32:46.737337 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-10 00:32:46.737346 | orchestrator | Tuesday 10 March 2026  00:32:40 +0000 (0:00:00.581) 0:06:33.397 *********
2026-03-10 00:32:46.737355 | orchestrator | ok: [testbed-manager]
2026-03-10 00:32:46.737365 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:32:46.737374 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:32:46.737383 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:32:46.737392 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:32:46.737402 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:32:46.737411 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:32:46.737420 | orchestrator |
2026-03-10 00:32:46.737430 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-10 00:32:46.737440 | orchestrator | Tuesday 10 March 2026  00:32:45 +0000 (0:00:04.765) 0:06:38.163 *********
2026-03-10 00:32:46.737449 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:46.737458 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:46.737467 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:46.737476 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:32:46.737486 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:32:46.737495 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:32:46.737504 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:32:46.737513 | orchestrator |
2026-03-10 00:32:46.737523 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-10 00:32:46.737533 | orchestrator | Tuesday 10 March 2026  00:32:46 +0000 (0:00:00.527) 0:06:38.690 *********
2026-03-10 00:32:46.737542 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-10 00:32:46.737552 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-10 00:32:46.737561 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:46.737570 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-10 00:32:46.737580 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-10 00:32:46.737589 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:46.737598 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-10 00:32:46.737607 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-10 00:32:46.737616 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:46.737633 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-10 00:33:06.522356 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-10 00:33:06.522504 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:06.522531 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-10 00:33:06.522550 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-10 00:33:06.522568 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:06.522617 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-10 00:33:06.522636 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-10 00:33:06.522653 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:06.522670 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-10 00:33:06.522687 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-10 00:33:06.522704 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:06.522766 | orchestrator |
2026-03-10 00:33:06.522789 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-10 00:33:06.522808 | orchestrator | Tuesday 10 March 2026  00:32:46 +0000 (0:00:00.750) 0:06:39.441 *********
2026-03-10 00:33:06.522824 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:06.522841 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:06.522857 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:06.522875 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:06.522891 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:06.522907 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:06.522924 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:06.522941 | orchestrator |
2026-03-10 00:33:06.522960 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-10 00:33:06.522980 | orchestrator | Tuesday 10 March 2026  00:32:47 +0000 (0:00:00.536) 0:06:39.977 *********
2026-03-10 00:33:06.522998 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:06.523018 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:06.523035 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:06.523053 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:06.523071 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:06.523088 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:06.523105 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:06.523122 | orchestrator |
2026-03-10 00:33:06.523140 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-10 00:33:06.523159 | orchestrator | Tuesday 10 March 2026  00:32:48 +0000 (0:00:00.579) 0:06:40.557 *********
2026-03-10 00:33:06.523178 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:06.523194 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:06.523209 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:06.523227 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:06.523245 | orchestrator |
skipping: [testbed-node-0]
2026-03-10 00:33:06.523262 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:06.523279 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:06.523295 | orchestrator |
2026-03-10 00:33:06.523312 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-10 00:33:06.523327 | orchestrator | Tuesday 10 March 2026  00:32:48 +0000 (0:00:00.545) 0:06:41.102 *********
2026-03-10 00:33:06.523343 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.523360 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:06.523377 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:06.523392 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:06.523408 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:06.523424 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:06.523440 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:06.523457 | orchestrator |
2026-03-10 00:33:06.523473 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-10 00:33:06.523492 | orchestrator | Tuesday 10 March 2026  00:32:50 +0000 (0:00:01.979) 0:06:43.082 *********
2026-03-10 00:33:06.523511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:33:06.523532 | orchestrator |
2026-03-10 00:33:06.523549 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-10 00:33:06.523567 | orchestrator | Tuesday 10 March 2026  00:32:51 +0000 (0:00:00.953) 0:06:44.035 *********
2026-03-10 00:33:06.523614 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.523631 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:06.523648 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:06.523665 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:06.523682 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:06.523699 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:06.523715 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:06.523764 | orchestrator |
2026-03-10 00:33:06.523781 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-10 00:33:06.523798 | orchestrator | Tuesday 10 March 2026  00:32:52 +0000 (0:00:00.840) 0:06:44.875 *********
2026-03-10 00:33:06.523814 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.523829 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:06.523845 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:06.523860 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:06.523875 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:06.523891 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:06.523906 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:06.523922 | orchestrator |
2026-03-10 00:33:06.523938 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-10 00:33:06.523954 | orchestrator | Tuesday 10 March 2026  00:32:53 +0000 (0:00:00.887) 0:06:45.763 *********
2026-03-10 00:33:06.523970 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.523986 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:06.524001 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:06.524016 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:06.524033 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:06.524049 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:06.524064 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:06.524080 | orchestrator |
2026-03-10 00:33:06.524097 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-10 00:33:06.524146 | orchestrator | Tuesday 10 March 2026  00:32:54 +0000 (0:00:01.625) 0:06:47.388 *********
2026-03-10 00:33:06.524167 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:06.524186 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:06.524205 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:06.524223 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:06.524240 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:06.524258 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:06.524278 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:06.524295 | orchestrator |
2026-03-10 00:33:06.524313 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-10 00:33:06.524330 | orchestrator | Tuesday 10 March 2026  00:32:56 +0000 (0:00:01.395) 0:06:48.783 *********
2026-03-10 00:33:06.524349 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.524366 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:06.524384 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:06.524403 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:06.524422 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:06.524440 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:06.524457 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:06.524468 | orchestrator |
2026-03-10 00:33:06.524479 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-10 00:33:06.524492 | orchestrator | Tuesday 10 March 2026  00:32:57 +0000 (0:00:01.400) 0:06:50.183 *********
2026-03-10 00:33:06.524510 | orchestrator | changed: [testbed-manager]
2026-03-10 00:33:06.524528 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:06.524547 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:06.524564 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:06.524582 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:06.524600 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:06.524617 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:06.524635 | orchestrator |
2026-03-10 00:33:06.524672 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-10 00:33:06.524689 | orchestrator | Tuesday 10 March 2026  00:32:59 +0000 (0:00:01.468) 0:06:51.652 *********
2026-03-10 00:33:06.524706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:33:06.524768 | orchestrator |
2026-03-10 00:33:06.524785 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-10 00:33:06.524802 | orchestrator | Tuesday 10 March 2026  00:33:00 +0000 (0:00:01.081) 0:06:52.734 *********
2026-03-10 00:33:06.524820 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.524839 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:06.524857 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:06.524876 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:06.524895 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:06.524912 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:06.524928 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:06.524945 | orchestrator |
2026-03-10 00:33:06.524962 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-10 00:33:06.524979 | orchestrator | Tuesday 10 March 2026  00:33:01 +0000 (0:00:01.368) 0:06:54.102 *********
2026-03-10 00:33:06.524995 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.525036 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:06.525068 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:06.525087 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:06.525105 | orchestrator |
ok: [testbed-node-0]
2026-03-10 00:33:06.525143 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:06.525162 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:06.525179 | orchestrator |
2026-03-10 00:33:06.525199 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-10 00:33:06.525218 | orchestrator | Tuesday 10 March 2026  00:33:02 +0000 (0:00:01.143) 0:06:55.245 *********
2026-03-10 00:33:06.525236 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.525255 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:06.525274 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:06.525293 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:06.525310 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:06.525326 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:06.525343 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:06.525362 | orchestrator |
2026-03-10 00:33:06.525380 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-10 00:33:06.525398 | orchestrator | Tuesday 10 March 2026  00:33:03 +0000 (0:00:01.125) 0:06:56.371 *********
2026-03-10 00:33:06.525416 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:06.525433 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:06.525450 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:06.525469 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:06.525487 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:06.525505 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:06.525522 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:06.525538 | orchestrator |
2026-03-10 00:33:06.525555 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-10 00:33:06.525573 | orchestrator | Tuesday 10 March 2026  00:33:05 +0000 (0:00:01.328) 0:06:57.699 *********
2026-03-10 00:33:06.525591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:33:06.525611 | orchestrator |
2026-03-10 00:33:06.525630 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:06.525649 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.949) 0:06:58.649 *********
2026-03-10 00:33:06.525667 | orchestrator |
2026-03-10 00:33:06.525685 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:06.525783 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.040) 0:06:58.690 *********
2026-03-10 00:33:06.525801 | orchestrator |
2026-03-10 00:33:06.525812 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:06.525823 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.039) 0:06:58.729 *********
2026-03-10 00:33:06.525834 | orchestrator |
2026-03-10 00:33:06.525845 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:06.525874 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.047) 0:06:58.777 *********
2026-03-10 00:33:33.480543 | orchestrator |
2026-03-10 00:33:33.480695 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:33.480809 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.039) 0:06:58.816 *********
2026-03-10 00:33:33.480830 | orchestrator |
2026-03-10 00:33:33.480850 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:33.480869 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.039) 0:06:58.855 *********
2026-03-10 00:33:33.480888 | orchestrator |
2026-03-10 00:33:33.480906 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-10 00:33:33.480924 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.047) 0:06:58.902 *********
2026-03-10 00:33:33.480941 | orchestrator |
2026-03-10 00:33:33.480959 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-10 00:33:33.480976 | orchestrator | Tuesday 10 March 2026  00:33:06 +0000 (0:00:00.039) 0:06:58.942 *********
2026-03-10 00:33:33.480995 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:33.481014 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:33.481034 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:33.481053 | orchestrator |
2026-03-10 00:33:33.481074 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-10 00:33:33.481098 | orchestrator | Tuesday 10 March 2026  00:33:07 +0000 (0:00:01.181) 0:07:00.123 *********
2026-03-10 00:33:33.481118 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:33.481139 | orchestrator | changed: [testbed-manager]
2026-03-10 00:33:33.481159 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:33.481178 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:33.481196 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:33.481226 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:33.481254 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:33.481282 | orchestrator |
2026-03-10 00:33:33.481306 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-10 00:33:33.481324 | orchestrator | Tuesday 10 March 2026  00:33:09 +0000 (0:00:01.445) 0:07:01.569 *********
2026-03-10 00:33:33.481342 | orchestrator | changed: [testbed-manager]
2026-03-10 00:33:33.481360 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:33.481377 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:33.481395 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:33.481422 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:33.481452 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:33.481481 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:33.481509 | orchestrator |
2026-03-10 00:33:33.481538 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-10 00:33:33.481556 | orchestrator | Tuesday 10 March 2026  00:33:10 +0000 (0:00:01.167) 0:07:02.736 *********
2026-03-10 00:33:33.481575 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:33.481593 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:33.481612 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:33.481629 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:33.481649 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:33.481669 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:33.481688 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:33.481708 | orchestrator |
2026-03-10 00:33:33.481761 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-10 00:33:33.481782 | orchestrator | Tuesday 10 March 2026  00:33:12 +0000 (0:00:02.407) 0:07:05.144 *********
2026-03-10 00:33:33.481841 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:33.481860 | orchestrator |
2026-03-10 00:33:33.481901 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-10 00:33:33.481920 | orchestrator | Tuesday 10 March 2026  00:33:12 +0000 (0:00:00.125) 0:07:05.269 *********
2026-03-10 00:33:33.481937 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:33.481956 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:33.481973 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:33.481990 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:33.482007 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:33.482111 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:33:33.482132 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:33.482149 | orchestrator |
2026-03-10 00:33:33.482166 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-10 00:33:33.482186 | orchestrator | Tuesday 10 March 2026  00:33:13 +0000 (0:00:00.990) 0:07:06.260 *********
2026-03-10 00:33:33.482204 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:33.482223 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:33.482241 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:33.482259 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:33.482277 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:33.482294 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:33.482313 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:33.482331 | orchestrator |
2026-03-10 00:33:33.482350 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-10 00:33:33.482368 | orchestrator | Tuesday 10 March 2026  00:33:14 +0000 (0:00:00.543) 0:07:06.804 *********
2026-03-10 00:33:33.482388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:33:33.482410 | orchestrator |
2026-03-10 00:33:33.482429 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-10 00:33:33.482447 | orchestrator | Tuesday 10 March 2026  00:33:15 +0000 (0:00:01.158) 0:07:07.963 *********
2026-03-10 00:33:33.482464 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:33.482482 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:33.482501 | orchestrator | ok:
[testbed-node-4]
2026-03-10 00:33:33.482520 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:33.482539 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:33.482558 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:33.482577 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:33.482595 | orchestrator |
2026-03-10 00:33:33.482613 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-10 00:33:33.482631 | orchestrator | Tuesday 10 March 2026  00:33:16 +0000 (0:00:00.872) 0:07:08.835 *********
2026-03-10 00:33:33.482650 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-10 00:33:33.482698 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-10 00:33:33.482756 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-10 00:33:33.482773 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-10 00:33:33.482784 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-10 00:33:33.482795 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-10 00:33:33.482806 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-10 00:33:33.482816 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-10 00:33:33.482827 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-10 00:33:33.482838 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-10 00:33:33.482848 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-10 00:33:33.482859 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-10 00:33:33.482886 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-10 00:33:33.482897 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-10 00:33:33.482908 | orchestrator |
2026-03-10 00:33:33.482919 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-10 00:33:33.482929 | orchestrator | Tuesday 10 March 2026  00:33:18 +0000 (0:00:02.540) 0:07:11.375 *********
2026-03-10 00:33:33.482940 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:33.482951 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:33.482961 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:33.482972 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:33.482982 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:33.482993 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:33.483003 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:33.483014 | orchestrator |
2026-03-10 00:33:33.483024 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-10 00:33:33.483035 | orchestrator | Tuesday 10 March 2026  00:33:19 +0000 (0:00:00.773) 0:07:12.149 *********
2026-03-10 00:33:33.483049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:33:33.483062 | orchestrator |
2026-03-10 00:33:33.483073 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-10 00:33:33.483084 | orchestrator | Tuesday 10 March 2026  00:33:20 +0000 (0:00:00.854) 0:07:13.003 *********
2026-03-10 00:33:33.483094 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:33.483105 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:33.483116 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:33.483126 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:33.483137 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:33.483147 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:33.483156 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:33.483165 | orchestrator |
2026-03-10 00:33:33.483175 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-10 00:33:33.483185 | orchestrator | Tuesday 10 March 2026  00:33:21 +0000 (0:00:00.861) 0:07:13.865 *********
2026-03-10 00:33:33.483203 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:33.483213 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:33.483223 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:33.483232 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:33.483241 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:33.483250 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:33.483260 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:33.483269 | orchestrator |
2026-03-10 00:33:33.483279 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-10 00:33:33.483289 | orchestrator | Tuesday 10 March 2026  00:33:22 +0000 (0:00:01.096) 0:07:14.961 *********
2026-03-10 00:33:33.483299 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:33.483308 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:33.483317 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:33.483327 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:33.483336 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:33.483345 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:33.483355 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:33.483364 | orchestrator |
2026-03-10 00:33:33.483374 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-10 00:33:33.483383 | orchestrator | Tuesday 10 March 2026  00:33:23 +0000 (0:00:00.524) 0:07:15.486 *********
2026-03-10 00:33:33.483393 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:33.483402 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:33:33.483411 |
orchestrator | ok: [testbed-node-3]
2026-03-10 00:33:33.483421 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:33:33.483430 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:33:33.483445 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:33:33.483455 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:33:33.483464 | orchestrator |
2026-03-10 00:33:33.483474 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-10 00:33:33.483483 | orchestrator | Tuesday 10 March 2026 00:33:24 +0000 (0:00:01.574) 0:07:17.060 *********
2026-03-10 00:33:33.483493 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:33:33.483502 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:33:33.483511 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:33:33.483521 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:33:33.483530 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:33:33.483539 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:33:33.483549 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:33:33.483558 | orchestrator |
2026-03-10 00:33:33.483567 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-10 00:33:33.483577 | orchestrator | Tuesday 10 March 2026 00:33:25 +0000 (0:00:00.492) 0:07:17.553 *********
2026-03-10 00:33:33.483587 | orchestrator | ok: [testbed-manager]
2026-03-10 00:33:33.483596 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:33:33.483605 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:33:33.483615 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:33:33.483624 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:33:33.483634 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:33:33.483651 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:06.422315 | orchestrator |
2026-03-10 00:34:06.422460 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-10 00:34:06.422478 | orchestrator | Tuesday 10 March 2026 00:33:33 +0000 (0:00:08.354) 0:07:25.908 *********
2026-03-10 00:34:06.422491 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.422503 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:06.422515 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:06.422525 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:06.422536 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:06.422547 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:06.422558 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:06.422568 | orchestrator |
2026-03-10 00:34:06.422579 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-10 00:34:06.422591 | orchestrator | Tuesday 10 March 2026 00:33:35 +0000 (0:00:01.605) 0:07:27.513 *********
2026-03-10 00:34:06.422601 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.422612 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:06.422622 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:06.422633 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:06.422644 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:06.422654 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:06.422665 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:06.422676 | orchestrator |
2026-03-10 00:34:06.422687 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-10 00:34:06.422697 | orchestrator | Tuesday 10 March 2026 00:33:36 +0000 (0:00:01.757) 0:07:29.270 *********
2026-03-10 00:34:06.422749 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.422767 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:06.422784 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:06.422802 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:06.422817 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:06.422829 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:06.422843 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:06.422856 | orchestrator |
2026-03-10 00:34:06.422868 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-10 00:34:06.422881 | orchestrator | Tuesday 10 March 2026 00:33:38 +0000 (0:00:01.701) 0:07:30.972 *********
2026-03-10 00:34:06.422894 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.422907 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.422920 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.422961 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.422974 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.422986 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.422999 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.423010 | orchestrator |
2026-03-10 00:34:06.423023 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-10 00:34:06.423035 | orchestrator | Tuesday 10 March 2026 00:33:39 +0000 (0:00:00.887) 0:07:31.859 *********
2026-03-10 00:34:06.423047 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:34:06.423060 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:34:06.423073 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:34:06.423085 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:34:06.423099 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:34:06.423111 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:34:06.423123 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:34:06.423135 | orchestrator |
2026-03-10 00:34:06.423147 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-10 00:34:06.423159 | orchestrator | Tuesday 10 March 2026 00:33:40 +0000 (0:00:01.013) 0:07:32.873 *********
2026-03-10 00:34:06.423169 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:34:06.423180 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:34:06.423190 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:34:06.423201 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:34:06.423212 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:34:06.423222 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:34:06.423233 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:34:06.423244 | orchestrator |
2026-03-10 00:34:06.423254 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-10 00:34:06.423265 | orchestrator | Tuesday 10 March 2026 00:33:40 +0000 (0:00:00.534) 0:07:33.407 *********
2026-03-10 00:34:06.423276 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.423307 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.423319 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.423329 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.423340 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.423351 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.423361 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.423372 | orchestrator |
2026-03-10 00:34:06.423382 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-10 00:34:06.423393 | orchestrator | Tuesday 10 March 2026 00:33:41 +0000 (0:00:00.541) 0:07:33.949 *********
2026-03-10 00:34:06.423404 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.423414 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.423425 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.423437 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.423447 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.423458 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.423468 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.423479 | orchestrator |
2026-03-10 00:34:06.423489 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-10 00:34:06.423500 | orchestrator | Tuesday 10 March 2026 00:33:42 +0000 (0:00:00.523) 0:07:34.472 *********
2026-03-10 00:34:06.423511 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.423521 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.423532 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.423542 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.423553 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.423564 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.423574 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.423584 | orchestrator |
2026-03-10 00:34:06.423595 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-10 00:34:06.423606 | orchestrator | Tuesday 10 March 2026 00:33:42 +0000 (0:00:00.749) 0:07:35.222 *********
2026-03-10 00:34:06.423617 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.423627 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.423648 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.423658 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.423669 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.423679 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.423689 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.423700 | orchestrator |
2026-03-10 00:34:06.423832 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-10 00:34:06.423846 | orchestrator | Tuesday 10 March 2026 00:33:48 +0000 (0:00:05.684) 0:07:40.907 *********
2026-03-10 00:34:06.423857 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:34:06.423867 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:34:06.423878 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:34:06.423888 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:34:06.423899 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:34:06.423910 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:34:06.423920 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:34:06.423931 | orchestrator |
2026-03-10 00:34:06.423941 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-10 00:34:06.423952 | orchestrator | Tuesday 10 March 2026 00:33:49 +0000 (0:00:00.562) 0:07:41.469 *********
2026-03-10 00:34:06.423964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:34:06.423978 | orchestrator |
2026-03-10 00:34:06.423989 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-10 00:34:06.424000 | orchestrator | Tuesday 10 March 2026 00:33:50 +0000 (0:00:01.071) 0:07:42.540 *********
2026-03-10 00:34:06.424011 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.424021 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.424032 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.424042 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.424053 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.424063 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.424074 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.424084 | orchestrator |
2026-03-10 00:34:06.424095 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-10 00:34:06.424105 | orchestrator | Tuesday 10 March 2026 00:33:52 +0000 (0:00:02.114) 0:07:44.654 *********
2026-03-10 00:34:06.424116 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.424126 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.424137 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.424147 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.424158 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.424169 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.424179 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.424189 | orchestrator |
2026-03-10 00:34:06.424200 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-10 00:34:06.424211 | orchestrator | Tuesday 10 March 2026 00:33:53 +0000 (0:00:01.139) 0:07:45.794 *********
2026-03-10 00:34:06.424221 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:06.424232 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:06.424242 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:06.424252 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:06.424263 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:06.424273 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:06.424284 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:06.424294 | orchestrator |
2026-03-10 00:34:06.424305 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-10 00:34:06.424316 | orchestrator | Tuesday 10 March 2026 00:33:54 +0000 (0:00:00.863) 0:07:46.657 *********
2026-03-10 00:34:06.424333 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424346 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424366 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424377 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424387 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424398 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424408 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:34:06.424419 | orchestrator |
2026-03-10 00:34:06.424430 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-10 00:34:06.424440 | orchestrator | Tuesday 10 March 2026 00:33:56 +0000 (0:00:01.823) 0:07:48.481 *********
2026-03-10 00:34:06.424451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:34:06.424462 | orchestrator |
2026-03-10 00:34:06.424473 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-10 00:34:06.424484 | orchestrator | Tuesday 10 March 2026 00:33:56 +0000 (0:00:00.801) 0:07:49.283 *********
2026-03-10 00:34:06.424494 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:06.424505 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:06.424516 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:06.424526 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:06.424537 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:06.424548 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:06.424558 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:06.424569 | orchestrator |
2026-03-10 00:34:06.424586 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-10 00:34:39.619270 | orchestrator | Tuesday 10 March 2026 00:34:06 +0000 (0:00:09.566) 0:07:58.849 *********
2026-03-10 00:34:39.619390 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:39.619406 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:39.619417 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:39.619431 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:39.619450 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:39.619480 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:39.619499 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:39.619517 | orchestrator |
2026-03-10 00:34:39.619536 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-10 00:34:39.619555 | orchestrator | Tuesday 10 March 2026 00:34:08 +0000 (0:00:01.968) 0:08:00.818 *********
2026-03-10 00:34:39.619573 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:39.619591 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:39.619611 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:39.619630 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:39.619648 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:39.619666 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:39.619686 | orchestrator |
2026-03-10 00:34:39.619740 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-10 00:34:39.619761 | orchestrator | Tuesday 10 March 2026 00:34:09 +0000 (0:00:01.311) 0:08:02.129 *********
2026-03-10 00:34:39.619774 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.619787 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.619799 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.619812 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.619824 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.619862 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.619875 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.619887 | orchestrator |
2026-03-10 00:34:39.619899 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-10 00:34:39.619912 | orchestrator |
2026-03-10 00:34:39.619924 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-10 00:34:39.619936 | orchestrator | Tuesday 10 March 2026 00:34:10 +0000 (0:00:01.311) 0:08:03.441 *********
2026-03-10 00:34:39.619948 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:34:39.619960 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:34:39.619971 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:34:39.619986 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:34:39.620005 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:34:39.620023 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:34:39.620041 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:34:39.620060 | orchestrator |
2026-03-10 00:34:39.620078 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-10 00:34:39.620096 | orchestrator |
2026-03-10 00:34:39.620118 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-10 00:34:39.620138 | orchestrator | Tuesday 10 March 2026 00:34:11 +0000 (0:00:00.923) 0:08:04.364 *********
2026-03-10 00:34:39.620155 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.620168 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.620179 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.620189 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.620200 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.620210 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.620220 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.620231 | orchestrator |
2026-03-10 00:34:39.620241 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-10 00:34:39.620266 | orchestrator | Tuesday 10 March 2026 00:34:13 +0000 (0:00:01.372) 0:08:05.737 *********
2026-03-10 00:34:39.620277 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:39.620288 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:39.620298 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:39.620309 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:39.620319 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:39.620330 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:39.620340 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:39.620350 | orchestrator |
2026-03-10 00:34:39.620361 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-10 00:34:39.620372 | orchestrator | Tuesday 10 March 2026 00:34:14 +0000 (0:00:01.444) 0:08:07.181 *********
2026-03-10 00:34:39.620382 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:34:39.620393 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:34:39.620403 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:34:39.620413 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:34:39.620424 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:34:39.620434 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:34:39.620445 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:34:39.620455 | orchestrator |
2026-03-10 00:34:39.620466 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-10 00:34:39.620476 | orchestrator | Tuesday 10 March 2026 00:34:15 +0000 (0:00:00.532) 0:08:07.714 *********
2026-03-10 00:34:39.620488 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:34:39.620501 | orchestrator |
2026-03-10 00:34:39.620511 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-10 00:34:39.620522 | orchestrator | Tuesday 10 March 2026 00:34:16 +0000 (0:00:01.031) 0:08:08.745 *********
2026-03-10 00:34:39.620534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:34:39.620556 | orchestrator |
2026-03-10 00:34:39.620567 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-10 00:34:39.620578 | orchestrator | Tuesday 10 March 2026 00:34:17 +0000 (0:00:00.825) 0:08:09.571 *********
2026-03-10 00:34:39.620588 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.620599 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.620609 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.620620 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.620630 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.620641 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.620651 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.620662 | orchestrator |
2026-03-10 00:34:39.620720 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-10 00:34:39.620734 | orchestrator | Tuesday 10 March 2026 00:34:26 +0000 (0:00:09.324) 0:08:18.896 *********
2026-03-10 00:34:39.620745 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.620755 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.620765 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.620776 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.620787 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.620797 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.620807 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.620818 | orchestrator |
2026-03-10 00:34:39.620828 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-10 00:34:39.620839 | orchestrator | Tuesday 10 March 2026 00:34:27 +0000 (0:00:01.162) 0:08:20.058 *********
2026-03-10 00:34:39.620849 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.620860 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.620870 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.620880 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.620891 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.620901 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.620912 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.620922 | orchestrator |
2026-03-10 00:34:39.620933 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-10 00:34:39.620943 | orchestrator | Tuesday 10 March 2026 00:34:29 +0000 (0:00:01.465) 0:08:21.524 *********
2026-03-10 00:34:39.620954 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.620964 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.620975 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.620985 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.620995 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.621006 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.621016 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.621026 | orchestrator |
2026-03-10 00:34:39.621037 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-10 00:34:39.621047 | orchestrator | Tuesday 10 March 2026 00:34:31 +0000 (0:00:02.113) 0:08:23.637 *********
2026-03-10 00:34:39.621058 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.621068 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.621079 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.621089 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.621100 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.621110 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.621121 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.621132 | orchestrator |
2026-03-10 00:34:39.621142 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-10 00:34:39.621153 | orchestrator | Tuesday 10 March 2026 00:34:32 +0000 (0:00:01.261) 0:08:24.899 *********
2026-03-10 00:34:39.621164 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.621174 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.621193 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.621203 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.621227 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.621238 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.621248 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.621259 | orchestrator |
2026-03-10 00:34:39.621270 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-10 00:34:39.621280 | orchestrator |
2026-03-10 00:34:39.621296 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-10 00:34:39.621307 | orchestrator | Tuesday 10 March 2026 00:34:34 +0000 (0:00:02.114) 0:08:27.013 *********
2026-03-10 00:34:39.621318 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:34:39.621329 | orchestrator |
2026-03-10 00:34:39.621340 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-10 00:34:39.621351 | orchestrator | Tuesday 10 March 2026 00:34:35 +0000 (0:00:00.851) 0:08:27.865 *********
2026-03-10 00:34:39.621362 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:39.621372 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:39.621383 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:39.621393 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:39.621404 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:39.621415 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:39.621425 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:39.621436 | orchestrator |
2026-03-10 00:34:39.621447 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-10 00:34:39.621457 | orchestrator | Tuesday 10 March 2026 00:34:36 +0000 (0:00:01.061) 0:08:28.927 *********
2026-03-10 00:34:39.621468 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:39.621479 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:39.621490 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:39.621501 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:39.621511 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:39.621522 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:39.621532 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:39.621543 | orchestrator |
2026-03-10 00:34:39.621554 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-10 00:34:39.621564 | orchestrator | Tuesday 10 March 2026 00:34:37 +0000 (0:00:01.213) 0:08:30.141 *********
2026-03-10 00:34:39.621575 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:34:39.621586 | orchestrator |
2026-03-10 00:34:39.621597 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-10 00:34:39.621607 | orchestrator | Tuesday 10 March 2026 00:34:38 +0000 (0:00:01.016) 0:08:31.157 *********
2026-03-10 00:34:39.621618 | orchestrator | ok: [testbed-manager]
2026-03-10 00:34:39.621629 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:34:39.621639 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:34:39.621650 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:34:39.621660 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:34:39.621671 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:34:39.621681 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:34:39.621748 | orchestrator |
2026-03-10 00:34:39.621780 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-10 00:34:41.206940 | orchestrator | Tuesday 10 March 2026 00:34:39 +0000 (0:00:00.894) 0:08:32.052 *********
2026-03-10 00:34:41.207052 | orchestrator | changed: [testbed-manager]
2026-03-10 00:34:41.207068 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:34:41.207079 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:34:41.207090 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:34:41.207101 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:34:41.207112 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:34:41.207122 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:34:41.207159 | orchestrator |
2026-03-10 00:34:41.207172 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:34:41.207184 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-10 00:34:41.207196 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-10 00:34:41.207207 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-10 00:34:41.207218 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-10 00:34:41.207229 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-10 00:34:41.207239 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-10 00:34:41.207250 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-10 00:34:41.207261 | orchestrator |
2026-03-10 00:34:41.207271 | orchestrator |
2026-03-10 00:34:41.207282 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:34:41.207293 | orchestrator | Tuesday 10 March 2026 00:34:40 +0000 (0:00:01.078) 0:08:33.130 *********
2026-03-10 00:34:41.207304 | orchestrator | ===============================================================================
2026-03-10 00:34:41.207315 | orchestrator | osism.commons.packages : Install required packages --------------------- 85.99s
2026-03-10 00:34:41.207326 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.19s
2026-03-10 00:34:41.207336 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.09s
2026-03-10 00:34:41.207347 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.54s
2026-03-10 00:34:41.207357 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.33s
2026-03-10 00:34:41.207384 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.15s
2026-03-10 00:34:41.207396 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.00s
2026-03-10 00:34:41.207408 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.26s
2026-03-10 00:34:41.207419 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.87s
2026-03-10 00:34:41.207430 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.57s
2026-03-10 00:34:41.207440 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.42s
2026-03-10 00:34:41.207463 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.32s
2026-03-10 00:34:41.207476 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.35s
2026-03-10 00:34:41.207489 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.33s
2026-03-10 00:34:41.207501 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.23s
2026-03-10 00:34:41.207514 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.15s
2026-03-10 00:34:41.207526 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.71s
2026-03-10 00:34:41.207538 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.35s
2026-03-10 00:34:41.207551 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.70s
2026-03-10 00:34:41.207564 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.68s
2026-03-10 00:34:41.525892 | orchestrator | + osism apply fail2ban
2026-03-10 00:34:54.381498 | orchestrator | 2026-03-10 00:34:54 | INFO  | Task 7c694119-e374-4b6c-8b70-46182c20842d (fail2ban) was prepared for execution.
2026-03-10 00:34:54.381641 | orchestrator | 2026-03-10 00:34:54 | INFO  | It takes a moment until task 7c694119-e374-4b6c-8b70-46182c20842d (fail2ban) has been started and output is visible here.
2026-03-10 00:35:17.105000 | orchestrator |
2026-03-10 00:35:17.105146 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-10 00:35:17.105177 | orchestrator |
2026-03-10 00:35:17.105199 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-10 00:35:17.105221 | orchestrator | Tuesday 10 March 2026 00:34:59 +0000 (0:00:00.300) 0:00:00.300 *********
2026-03-10 00:35:17.105244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:35:17.105268 | orchestrator |
2026-03-10 00:35:17.105288 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-10 00:35:17.105307 | orchestrator | Tuesday 10 March 2026 00:35:00 +0000 (0:00:01.205) 0:00:01.506 *********
2026-03-10 00:35:17.105328 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:35:17.105352 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:35:17.105373 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:35:17.105394 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:35:17.105410 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:35:17.105421 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:35:17.105432 | orchestrator | changed: [testbed-manager]
2026-03-10 00:35:17.105444 | orchestrator |
2026-03-10 00:35:17.105455 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-10 00:35:17.105466 | orchestrator | Tuesday 10 March 2026 00:35:12 +0000 (0:00:11.799) 0:00:13.305 *********
2026-03-10 00:35:17.105477 | orchestrator | changed: [testbed-manager] 2026-03-10 00:35:17.105488 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:17.105498 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:17.105509 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:17.105519 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:17.105530 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:17.105541 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:17.105551 | orchestrator | 2026-03-10 00:35:17.105562 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-10 00:35:17.105572 | orchestrator | Tuesday 10 March 2026 00:35:13 +0000 (0:00:01.481) 0:00:14.786 ********* 2026-03-10 00:35:17.105583 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:17.105595 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:17.105606 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:17.105616 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:17.105627 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:17.105637 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:17.105648 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:17.105659 | orchestrator | 2026-03-10 00:35:17.105669 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-10 00:35:17.105737 | orchestrator | Tuesday 10 March 2026 00:35:15 +0000 (0:00:01.496) 0:00:16.282 ********* 2026-03-10 00:35:17.105752 | orchestrator | changed: [testbed-manager] 2026-03-10 00:35:17.105762 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:17.105773 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:17.105784 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:17.105795 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:17.105805 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:17.105816 | orchestrator | changed: 
[testbed-node-5] 2026-03-10 00:35:17.105826 | orchestrator | 2026-03-10 00:35:17.105837 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:35:17.105848 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105889 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105901 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105912 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105923 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105933 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105944 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:35:17.105955 | orchestrator | 2026-03-10 00:35:17.105965 | orchestrator | 2026-03-10 00:35:17.105976 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:35:17.105987 | orchestrator | Tuesday 10 March 2026 00:35:16 +0000 (0:00:01.641) 0:00:17.924 ********* 2026-03-10 00:35:17.105998 | orchestrator | =============================================================================== 2026-03-10 00:35:17.106008 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.80s 2026-03-10 00:35:17.106373 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.64s 2026-03-10 00:35:17.106395 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.50s 2026-03-10 00:35:17.106413 | orchestrator | osism.services.fail2ban : 
Copy configuration files ---------------------- 1.48s 2026-03-10 00:35:17.106430 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.21s 2026-03-10 00:35:17.460101 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-10 00:35:17.460206 | orchestrator | + osism apply network 2026-03-10 00:35:29.716138 | orchestrator | 2026-03-10 00:35:29 | INFO  | Task f92953f6-c1a1-4342-ba5f-5f4b7da22d31 (network) was prepared for execution. 2026-03-10 00:35:29.716282 | orchestrator | 2026-03-10 00:35:29 | INFO  | It takes a moment until task f92953f6-c1a1-4342-ba5f-5f4b7da22d31 (network) has been started and output is visible here. 2026-03-10 00:35:59.036625 | orchestrator | 2026-03-10 00:35:59.036810 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-10 00:35:59.036833 | orchestrator | 2026-03-10 00:35:59.036845 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-10 00:35:59.036856 | orchestrator | Tuesday 10 March 2026 00:35:34 +0000 (0:00:00.280) 0:00:00.280 ********* 2026-03-10 00:35:59.036867 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.036880 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.036891 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:59.036902 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.036913 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.036923 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.036934 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.036944 | orchestrator | 2026-03-10 00:35:59.036955 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-10 00:35:59.036968 | orchestrator | Tuesday 10 March 2026 00:35:34 +0000 (0:00:00.732) 0:00:01.013 ********* 2026-03-10 00:35:59.036989 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:35:59.037011 | orchestrator | 2026-03-10 00:35:59.037030 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-10 00:35:59.037081 | orchestrator | Tuesday 10 March 2026 00:35:36 +0000 (0:00:01.245) 0:00:02.259 ********* 2026-03-10 00:35:59.037094 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.037105 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:59.037115 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.037126 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.037136 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.037147 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.037158 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.037171 | orchestrator | 2026-03-10 00:35:59.037183 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-10 00:35:59.037196 | orchestrator | Tuesday 10 March 2026 00:35:38 +0000 (0:00:02.006) 0:00:04.265 ********* 2026-03-10 00:35:59.037208 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.037220 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.037233 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:59.037246 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.037258 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.037269 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.037280 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.037290 | orchestrator | 2026-03-10 00:35:59.037301 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-10 00:35:59.037312 | orchestrator | Tuesday 10 March 2026 00:35:39 +0000 (0:00:01.919) 0:00:06.185 ********* 
2026-03-10 00:35:59.037322 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-10 00:35:59.037333 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-10 00:35:59.037344 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-10 00:35:59.037355 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-10 00:35:59.037365 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-10 00:35:59.037376 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-10 00:35:59.037386 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-10 00:35:59.037406 | orchestrator | 2026-03-10 00:35:59.037447 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-10 00:35:59.037472 | orchestrator | Tuesday 10 March 2026 00:35:40 +0000 (0:00:00.963) 0:00:07.149 ********* 2026-03-10 00:35:59.037484 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-10 00:35:59.037495 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:35:59.037506 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 00:35:59.037516 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 00:35:59.037527 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-10 00:35:59.037537 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 00:35:59.037551 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 00:35:59.037570 | orchestrator | 2026-03-10 00:35:59.037588 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-10 00:35:59.037606 | orchestrator | Tuesday 10 March 2026 00:35:44 +0000 (0:00:03.420) 0:00:10.569 ********* 2026-03-10 00:35:59.037625 | orchestrator | changed: [testbed-manager] 2026-03-10 00:35:59.037643 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:59.037689 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:59.037708 | orchestrator | changed: 
[testbed-node-2] 2026-03-10 00:35:59.037726 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:59.037744 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:59.037761 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:59.037780 | orchestrator | 2026-03-10 00:35:59.037798 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-10 00:35:59.037816 | orchestrator | Tuesday 10 March 2026 00:35:45 +0000 (0:00:01.595) 0:00:12.165 ********* 2026-03-10 00:35:59.037836 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-10 00:35:59.037855 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:35:59.037872 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 00:35:59.037890 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-10 00:35:59.037922 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 00:35:59.037942 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 00:35:59.037960 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 00:35:59.037979 | orchestrator | 2026-03-10 00:35:59.037998 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-10 00:35:59.038086 | orchestrator | Tuesday 10 March 2026 00:35:47 +0000 (0:00:01.738) 0:00:13.904 ********* 2026-03-10 00:35:59.038102 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.038113 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.038132 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:59.038152 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.038172 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.038192 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.038211 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.038232 | orchestrator | 2026-03-10 00:35:59.038253 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-10 00:35:59.038298 | 
orchestrator | Tuesday 10 March 2026 00:35:48 +0000 (0:00:01.249) 0:00:15.154 ********* 2026-03-10 00:35:59.038311 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:59.038321 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:35:59.038332 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:35:59.038343 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:35:59.038353 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:35:59.038364 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:35:59.038374 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:35:59.038385 | orchestrator | 2026-03-10 00:35:59.038396 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-10 00:35:59.038409 | orchestrator | Tuesday 10 March 2026 00:35:49 +0000 (0:00:00.714) 0:00:15.868 ********* 2026-03-10 00:35:59.038428 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.038447 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.038466 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:59.038481 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.038491 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.038502 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.038512 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.038523 | orchestrator | 2026-03-10 00:35:59.038533 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-10 00:35:59.038544 | orchestrator | Tuesday 10 March 2026 00:35:51 +0000 (0:00:02.306) 0:00:18.175 ********* 2026-03-10 00:35:59.038555 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:35:59.038566 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:35:59.038576 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:35:59.038587 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:35:59.038597 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:35:59.038608 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:35:59.038620 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-10 00:35:59.038632 | orchestrator | 2026-03-10 00:35:59.038643 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-10 00:35:59.038684 | orchestrator | Tuesday 10 March 2026 00:35:52 +0000 (0:00:00.945) 0:00:19.120 ********* 2026-03-10 00:35:59.038696 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.038712 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:59.038729 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:59.038745 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:59.038762 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:59.038779 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:59.038796 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:59.038814 | orchestrator | 2026-03-10 00:35:59.038832 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-10 00:35:59.038850 | orchestrator | Tuesday 10 March 2026 00:35:54 +0000 (0:00:01.670) 0:00:20.790 ********* 2026-03-10 00:35:59.038869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:35:59.038902 | orchestrator | 2026-03-10 00:35:59.038922 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-10 00:35:59.038942 | orchestrator | Tuesday 10 March 2026 00:35:55 +0000 (0:00:01.283) 0:00:22.074 ********* 2026-03-10 00:35:59.038961 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.038981 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.039001 | orchestrator 
| ok: [testbed-node-1] 2026-03-10 00:35:59.039020 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.039048 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.039065 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.039084 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.039102 | orchestrator | 2026-03-10 00:35:59.039120 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-10 00:35:59.039137 | orchestrator | Tuesday 10 March 2026 00:35:57 +0000 (0:00:01.142) 0:00:23.216 ********* 2026-03-10 00:35:59.039155 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:59.039173 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:59.039191 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:59.039210 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:59.039228 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:59.039244 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:59.039261 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:59.039279 | orchestrator | 2026-03-10 00:35:59.039299 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-10 00:35:59.039317 | orchestrator | Tuesday 10 March 2026 00:35:57 +0000 (0:00:00.695) 0:00:23.912 ********* 2026-03-10 00:35:59.039336 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039355 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039374 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039393 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039411 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039427 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039438 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039448 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039459 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039470 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039480 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039491 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039501 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:35:59.039512 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:35:59.039523 | orchestrator | 2026-03-10 00:35:59.039548 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-10 00:36:16.182472 | orchestrator | Tuesday 10 March 2026 00:35:59 +0000 (0:00:01.305) 0:00:25.217 ********* 2026-03-10 00:36:16.182721 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:36:16.182770 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:36:16.182791 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:36:16.182811 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:36:16.182830 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:36:16.182848 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:36:16.182865 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:36:16.182877 | orchestrator | 2026-03-10 00:36:16.182934 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-10 00:36:16.182957 | orchestrator | Tuesday 10 March 2026 00:35:59 +0000 (0:00:00.679) 0:00:25.897 ********* 2026-03-10 00:36:16.182971 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-5, testbed-node-4 2026-03-10 00:36:16.182988 | orchestrator | 2026-03-10 00:36:16.183001 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-10 00:36:16.183014 | orchestrator | Tuesday 10 March 2026 00:36:04 +0000 (0:00:04.623) 0:00:30.520 ********* 2026-03-10 00:36:16.183028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183055 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 
42}}) 2026-03-10 00:36:16.183118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183130 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183265 | orchestrator | 2026-03-10 00:36:16.183279 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-10 00:36:16.183291 | orchestrator | Tuesday 10 March 2026 00:36:10 +0000 (0:00:05.914) 0:00:36.435 ********* 2026-03-10 00:36:16.183305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183316 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183338 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183388 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:36:16.183410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:16.183457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:22.867862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:36:22.867979 | orchestrator | 2026-03-10 00:36:22.867997 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-10 00:36:22.868010 | orchestrator | Tuesday 10 March 2026 00:36:16 +0000 (0:00:05.927) 0:00:42.363 ********* 2026-03-10 00:36:22.868023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:36:22.868034 | orchestrator | 2026-03-10 00:36:22.868046 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-03-10 00:36:22.868057 | orchestrator | Tuesday 10 March 2026 00:36:17 +0000 (0:00:01.319) 0:00:43.683 *********
2026-03-10 00:36:22.868067 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:22.868079 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:22.868090 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:22.868100 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:22.868111 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:22.868121 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:22.868132 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:22.868142 | orchestrator |
2026-03-10 00:36:22.868154 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-10 00:36:22.868174 | orchestrator | Tuesday 10 March 2026 00:36:18 +0000 (0:00:01.198) 0:00:44.881 *********
2026-03-10 00:36:22.868186 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868197 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868208 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868219 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868229 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:36:22.868241 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868252 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868262 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868273 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868284 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:36:22.868294 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868322 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868334 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868346 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868383 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:36:22.868398 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868411 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868424 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868437 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868451 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:36:22.868464 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868476 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868489 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868501 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868514 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:36:22.868526 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868538 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868550 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868562 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868575 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:36:22.868587 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-10 00:36:22.868599 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-10 00:36:22.868611 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-10 00:36:22.868624 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-10 00:36:22.868687 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:36:22.868700 | orchestrator |
2026-03-10 00:36:22.868713 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-10 00:36:22.868743 | orchestrator | Tuesday 10 March 2026 00:36:21 +0000 (0:00:02.323) 0:00:47.205 *********
2026-03-10 00:36:22.868755 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:36:22.868766 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:36:22.868776 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:36:22.868787 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:36:22.868798 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:36:22.868809 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:36:22.868819 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:36:22.868830 | orchestrator |
2026-03-10 00:36:22.868841 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-10 00:36:22.868852 | orchestrator | Tuesday 10 March 2026 00:36:21 +0000 (0:00:00.676) 0:00:47.881 *********
2026-03-10 00:36:22.868863 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:36:22.868874 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:36:22.868884 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:36:22.868896 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:36:22.868907 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:36:22.868918 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:36:22.868928 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:36:22.868939 | orchestrator |
2026-03-10 00:36:22.868950 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:36:22.868962 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-10 00:36:22.868975 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 00:36:22.868995 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 00:36:22.869006 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 00:36:22.869017 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 00:36:22.869028 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 00:36:22.869039 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 00:36:22.869050 | orchestrator |
2026-03-10 00:36:22.869061 | orchestrator |
2026-03-10 00:36:22.869072 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:36:22.869083 | orchestrator | Tuesday 10 March 2026 00:36:22 +0000 (0:00:00.745) 0:00:48.626 *********
2026-03-10 00:36:22.869100 | orchestrator | ===============================================================================
2026-03-10 00:36:22.869112 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.93s
2026-03-10 00:36:22.869122 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.91s
2026-03-10 00:36:22.869133 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.62s
2026-03-10 00:36:22.869144 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.42s
2026-03-10 00:36:22.869155 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.32s
2026-03-10 00:36:22.869165 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s
2026-03-10 00:36:22.869176 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s
2026-03-10 00:36:22.869187 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.92s
2026-03-10 00:36:22.869198 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.74s
2026-03-10 00:36:22.869209 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s
2026-03-10 00:36:22.869219 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.60s
2026-03-10 00:36:22.869230 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s
2026-03-10 00:36:22.869241 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.31s
2026-03-10 00:36:22.869252 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2026-03-10 00:36:22.869263 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.25s
2026-03-10 00:36:22.869274 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s
2026-03-10 00:36:22.869284 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s
2026-03-10 00:36:22.869295 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.14s
2026-03-10 00:36:22.869306 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s
2026-03-10 00:36:22.869317 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s
2026-03-10 00:36:23.200778 | orchestrator | + osism apply wireguard
2026-03-10 00:36:35.373367 | orchestrator | 2026-03-10 00:36:35 | INFO  | Task 1172eecd-e900-40ed-b31f-18ed745b922e (wireguard) was prepared for execution.
2026-03-10 00:36:35.373505 | orchestrator | 2026-03-10 00:36:35 | INFO  | It takes a moment until task 1172eecd-e900-40ed-b31f-18ed745b922e (wireguard) has been started and output is visible here.
2026-03-10 00:36:55.633282 | orchestrator |
2026-03-10 00:36:55.633396 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-10 00:36:55.633436 | orchestrator |
2026-03-10 00:36:55.633448 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-10 00:36:55.633458 | orchestrator | Tuesday 10 March 2026 00:36:39 +0000 (0:00:00.232) 0:00:00.232 *********
2026-03-10 00:36:55.633468 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:55.633478 | orchestrator |
2026-03-10 00:36:55.633488 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-10 00:36:55.633498 | orchestrator | Tuesday 10 March 2026 00:36:41 +0000 (0:00:01.599) 0:00:01.831 *********
2026-03-10 00:36:55.633507 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633522 | orchestrator |
2026-03-10 00:36:55.633532 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-10 00:36:55.633542 | orchestrator | Tuesday 10 March 2026 00:36:47 +0000 (0:00:06.635) 0:00:08.467 *********
2026-03-10 00:36:55.633551 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633561 | orchestrator |
2026-03-10 00:36:55.633570 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-10 00:36:55.633580 | orchestrator | Tuesday 10 March 2026 00:36:48 +0000 (0:00:00.581) 0:00:09.049 *********
2026-03-10 00:36:55.633589 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633599 | orchestrator |
2026-03-10 00:36:55.633655 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-10 00:36:55.633665 | orchestrator | Tuesday 10 March 2026 00:36:48 +0000 (0:00:00.413) 0:00:09.462 *********
2026-03-10 00:36:55.633674 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:55.633684 | orchestrator |
2026-03-10 00:36:55.633693 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-10 00:36:55.633703 | orchestrator | Tuesday 10 March 2026 00:36:49 +0000 (0:00:00.696) 0:00:10.159 *********
2026-03-10 00:36:55.633712 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:55.633721 | orchestrator |
2026-03-10 00:36:55.633731 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-10 00:36:55.633740 | orchestrator | Tuesday 10 March 2026 00:36:50 +0000 (0:00:00.390) 0:00:10.550 *********
2026-03-10 00:36:55.633749 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:55.633759 | orchestrator |
2026-03-10 00:36:55.633768 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-10 00:36:55.633778 | orchestrator | Tuesday 10 March 2026 00:36:50 +0000 (0:00:00.430) 0:00:10.981 *********
2026-03-10 00:36:55.633787 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633796 | orchestrator |
2026-03-10 00:36:55.633806 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-10 00:36:55.633816 | orchestrator | Tuesday 10 March 2026 00:36:51 +0000 (0:00:01.213) 0:00:12.194 *********
2026-03-10 00:36:55.633825 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-10 00:36:55.633837 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633848 | orchestrator |
2026-03-10 00:36:55.633859 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-10 00:36:55.633869 | orchestrator | Tuesday 10 March 2026 00:36:52 +0000 (0:00:00.961) 0:00:13.156 *********
2026-03-10 00:36:55.633881 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633892 | orchestrator |
2026-03-10 00:36:55.633903 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-10 00:36:55.633915 | orchestrator | Tuesday 10 March 2026 00:36:54 +0000 (0:00:01.681) 0:00:14.837 *********
2026-03-10 00:36:55.633926 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:55.633937 | orchestrator |
2026-03-10 00:36:55.633947 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:36:55.633958 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:36:55.633977 | orchestrator |
2026-03-10 00:36:55.633995 | orchestrator |
2026-03-10 00:36:55.634012 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:36:55.634129 | orchestrator | Tuesday 10 March 2026 00:36:55 +0000 (0:00:00.879) 0:00:15.717 *********
2026-03-10 00:36:55.634144 | orchestrator | ===============================================================================
2026-03-10 00:36:55.634154 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.64s
2026-03-10 00:36:55.634165 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.68s
2026-03-10 00:36:55.634177 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.60s
2026-03-10 00:36:55.634187 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s
2026-03-10 00:36:55.634198 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2026-03-10 00:36:55.634208 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s
2026-03-10 00:36:55.634217 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-03-10 00:36:55.634227 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s
2026-03-10 00:36:55.634237 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-03-10 00:36:55.634246 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2026-03-10 00:36:55.634256 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-03-10 00:36:55.937492 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-10 00:36:55.969931 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-10 00:36:55.970062 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-10 00:36:56.052873 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 168 0 --:--:-- --:--:-- --:--:-- 168
2026-03-10 00:36:56.068695 | orchestrator | + osism apply --environment custom workarounds
2026-03-10 00:36:57.997117 | orchestrator | 2026-03-10 00:36:57 | INFO  | Trying to run play workarounds in environment custom
2026-03-10 00:37:08.177365 | orchestrator | 2026-03-10 00:37:08 | INFO  | Task 779b5173-01e2-4827-a75d-469b66f59999 (workarounds) was prepared for execution.
2026-03-10 00:37:08.177471 | orchestrator | 2026-03-10 00:37:08 | INFO  | It takes a moment until task 779b5173-01e2-4827-a75d-469b66f59999 (workarounds) has been started and output is visible here.
2026-03-10 00:37:33.721662 | orchestrator |
2026-03-10 00:37:33.721796 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:37:33.721812 | orchestrator |
2026-03-10 00:37:33.721825 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-10 00:37:33.721844 | orchestrator | Tuesday 10 March 2026 00:37:12 +0000 (0:00:00.134) 0:00:00.134 *********
2026-03-10 00:37:33.721869 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-10 00:37:33.721896 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-10 00:37:33.721915 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-10 00:37:33.721935 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-10 00:37:33.721954 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-10 00:37:33.721973 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-10 00:37:33.721993 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-10 00:37:33.722067 | orchestrator |
2026-03-10 00:37:33.722094 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-10 00:37:33.722112 | orchestrator |
2026-03-10 00:37:33.722132 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-10 00:37:33.722152 | orchestrator | Tuesday 10 March 2026 00:37:13 +0000 (0:00:00.785) 0:00:00.919 *********
2026-03-10 00:37:33.722204 | orchestrator | ok: [testbed-manager]
2026-03-10 00:37:33.722228 | orchestrator |
2026-03-10 00:37:33.722249 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-10 00:37:33.722269 | orchestrator |
2026-03-10 00:37:33.722289 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-10 00:37:33.722310 | orchestrator | Tuesday 10 March 2026 00:37:15 +0000 (0:00:02.511) 0:00:03.431 *********
2026-03-10 00:37:33.722331 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:37:33.722351 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:37:33.722371 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:37:33.722389 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:37:33.722408 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:37:33.722427 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:37:33.722448 | orchestrator |
2026-03-10 00:37:33.722468 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-10 00:37:33.722489 | orchestrator |
2026-03-10 00:37:33.722526 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-10 00:37:33.722545 | orchestrator | Tuesday 10 March 2026 00:37:17 +0000 (0:00:01.936) 0:00:05.367 *********
2026-03-10 00:37:33.722567 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-10 00:37:33.722629 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-10 00:37:33.722649 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-10 00:37:33.722668 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-10 00:37:33.722687 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-10 00:37:33.722706 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-10 00:37:33.722724 | orchestrator |
2026-03-10 00:37:33.722742 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-10 00:37:33.722760 | orchestrator | Tuesday 10 March 2026 00:37:19 +0000 (0:00:01.550) 0:00:06.917 *********
2026-03-10 00:37:33.722778 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:33.722798 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:33.722815 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:33.722832 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:33.722850 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:33.722868 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:33.722887 | orchestrator |
2026-03-10 00:37:33.722907 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-10 00:37:33.722926 | orchestrator | Tuesday 10 March 2026 00:37:22 +0000 (0:00:03.828) 0:00:10.746 *********
2026-03-10 00:37:33.722944 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:37:33.722964 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:37:33.722983 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:37:33.723003 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:37:33.723020 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:37:33.723039 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:37:33.723057 | orchestrator |
2026-03-10 00:37:33.723075 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-10 00:37:33.723095 | orchestrator |
2026-03-10 00:37:33.723113 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-10 00:37:33.723132 | orchestrator | Tuesday 10 March 2026 00:37:23 +0000 (0:00:00.702) 0:00:11.448 *********
2026-03-10 00:37:33.723151 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:33.723169 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:33.723188 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:33.723206 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:33.723226 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:33.723259 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:33.723277 | orchestrator | changed: [testbed-manager]
2026-03-10 00:37:33.723295 | orchestrator |
2026-03-10 00:37:33.723315 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-10 00:37:33.723333 | orchestrator | Tuesday 10 March 2026 00:37:25 +0000 (0:00:01.522) 0:00:12.971 *********
2026-03-10 00:37:33.723352 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:33.723363 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:33.723373 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:33.723384 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:33.723395 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:33.723405 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:33.723437 | orchestrator | changed: [testbed-manager]
2026-03-10 00:37:33.723448 | orchestrator |
2026-03-10 00:37:33.723459 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-10 00:37:33.723470 | orchestrator | Tuesday 10 March 2026 00:37:26 +0000 (0:00:01.764) 0:00:14.735 *********
2026-03-10 00:37:33.723480 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:37:33.723491 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:37:33.723501 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:37:33.723511 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:37:33.723522 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:37:33.723532 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:37:33.723542 | orchestrator | ok: [testbed-manager]
2026-03-10 00:37:33.723553 | orchestrator |
2026-03-10 00:37:33.723563 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-10 00:37:33.723596 | orchestrator | Tuesday 10 March 2026 00:37:28 +0000 (0:00:01.615) 0:00:16.351 *********
2026-03-10 00:37:33.723607 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:33.723618 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:33.723628 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:33.723639 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:33.723649 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:33.723660 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:33.723670 | orchestrator | changed: [testbed-manager]
2026-03-10 00:37:33.723680 | orchestrator |
2026-03-10 00:37:33.723691 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-10 00:37:33.723702 | orchestrator | Tuesday 10 March 2026 00:37:30 +0000 (0:00:01.866) 0:00:18.217 *********
2026-03-10 00:37:33.723712 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:37:33.723723 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:37:33.723733 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:37:33.723744 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:37:33.723754 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:37:33.723765 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:37:33.723775 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:37:33.723785 | orchestrator |
2026-03-10 00:37:33.723796 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-10 00:37:33.723806 | orchestrator |
2026-03-10 00:37:33.723817 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-10 00:37:33.723827 | orchestrator | Tuesday 10 March 2026 00:37:30 +0000 (0:00:00.643) 0:00:18.861 *********
2026-03-10 00:37:33.723838 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:37:33.723848 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:37:33.723859 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:37:33.723879 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:37:33.723897 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:37:33.723915 | orchestrator | ok: [testbed-manager]
2026-03-10 00:37:33.723933 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:37:33.723952 | orchestrator |
2026-03-10 00:37:33.723971 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:37:33.723990 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-10 00:37:33.724021 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:33.724033 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:33.724043 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:33.724058 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:33.724077 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:33.724095 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:33.724112 | orchestrator |
2026-03-10 00:37:33.724130 | orchestrator |
2026-03-10 00:37:33.724151 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:37:33.724169 | orchestrator | Tuesday 10 March 2026 00:37:33 +0000 (0:00:02.717) 0:00:21.579 *********
2026-03-10 00:37:33.724187 | orchestrator | ===============================================================================
2026-03-10 00:37:33.724198 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s
2026-03-10 00:37:33.724209 | orchestrator | Install python3-docker -------------------------------------------------- 2.72s
2026-03-10 00:37:33.724219 | orchestrator | Apply netplan configuration --------------------------------------------- 2.51s
2026-03-10 00:37:33.724230 | orchestrator | Apply netplan configuration --------------------------------------------- 1.94s
2026-03-10 00:37:33.724240 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.87s
2026-03-10 00:37:33.724251 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.76s
2026-03-10 00:37:33.724261 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.62s
2026-03-10 00:37:33.724272 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s
2026-03-10 00:37:33.724282 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.52s
2026-03-10 00:37:33.724292 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s
2026-03-10 00:37:33.724303 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s
2026-03-10 00:37:33.724323 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2026-03-10 00:37:34.412493 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-10 00:37:46.501232 | orchestrator | 2026-03-10 00:37:46 | INFO  | Task 25f6e1bf-8dbd-4c92-98fa-9c90191165cb (reboot) was prepared for execution.
2026-03-10 00:37:46.501377 | orchestrator | 2026-03-10 00:37:46 | INFO  | It takes a moment until task 25f6e1bf-8dbd-4c92-98fa-9c90191165cb (reboot) has been started and output is visible here.
2026-03-10 00:37:56.936848 | orchestrator |
2026-03-10 00:37:56.936966 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-10 00:37:56.936983 | orchestrator |
2026-03-10 00:37:56.936996 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-10 00:37:56.937007 | orchestrator | Tuesday 10 March 2026 00:37:50 +0000 (0:00:00.213) 0:00:00.213 *********
2026-03-10 00:37:56.937018 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:37:56.937030 | orchestrator |
2026-03-10 00:37:56.937041 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-10 00:37:56.937052 | orchestrator | Tuesday 10 March 2026 00:37:50 +0000 (0:00:00.115) 0:00:00.328 *********
2026-03-10 00:37:56.937063 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:56.937116 | orchestrator |
2026-03-10 00:37:56.937128 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-10 00:37:56.937139 | orchestrator | Tuesday 10 March 2026 00:37:51 +0000 (0:00:00.957) 0:00:01.286 *********
2026-03-10 00:37:56.937149 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:37:56.937160 | orchestrator |
2026-03-10 00:37:56.937171 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-10 00:37:56.937181 | orchestrator |
2026-03-10 00:37:56.937192 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-10 00:37:56.937202 | orchestrator | Tuesday 10 March 2026 00:37:52 +0000 (0:00:00.112) 0:00:01.398 *********
2026-03-10 00:37:56.937213 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:37:56.937224 | orchestrator |
2026-03-10 00:37:56.937234 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-10 00:37:56.937245 | orchestrator | Tuesday 10 March 2026 00:37:52 +0000 (0:00:00.107) 0:00:01.506 *********
2026-03-10 00:37:56.937255 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:56.937266 | orchestrator |
2026-03-10 00:37:56.937291 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-10 00:37:56.937303 | orchestrator | Tuesday 10 March 2026 00:37:52 +0000 (0:00:00.682) 0:00:02.188 *********
2026-03-10 00:37:56.937313 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:37:56.937324 | orchestrator |
2026-03-10 00:37:56.937334 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-10 00:37:56.937345 | orchestrator |
2026-03-10 00:37:56.937356 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-10 00:37:56.937366 | orchestrator | Tuesday 10 March 2026 00:37:52 +0000 (0:00:00.110) 0:00:02.298 *********
2026-03-10 00:37:56.937377 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:37:56.937393 | orchestrator |
2026-03-10 00:37:56.937414 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-10 00:37:56.937434 | orchestrator | Tuesday 10 March 2026 00:37:53 +0000 (0:00:00.217) 0:00:02.516 *********
2026-03-10 00:37:56.937454 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:56.937473 | orchestrator |
2026-03-10 00:37:56.937493 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-10 00:37:56.937512 | orchestrator | Tuesday 10 March 2026 00:37:53 +0000 (0:00:00.679) 0:00:03.196 *********
2026-03-10 00:37:56.937531 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:37:56.937550 | orchestrator |
2026-03-10 00:37:56.937641 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-10 00:37:56.937665 | orchestrator |
2026-03-10 00:37:56.937684 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-10 00:37:56.937703 | orchestrator | Tuesday 10 March 2026 00:37:53 +0000 (0:00:00.101) 0:00:03.297 *********
2026-03-10 00:37:56.937723 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:37:56.937741 | orchestrator |
2026-03-10 00:37:56.937760 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-10 00:37:56.937777 | orchestrator | Tuesday 10 March 2026 00:37:54 +0000 (0:00:00.106) 0:00:03.404 *********
2026-03-10 00:37:56.937795 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:56.937813 | orchestrator |
2026-03-10 00:37:56.937830 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-10 00:37:56.937847 | orchestrator | Tuesday 10 March 2026 00:37:54 +0000 (0:00:00.660) 0:00:04.065 *********
2026-03-10 00:37:56.937865 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:37:56.937885 | orchestrator |
2026-03-10 00:37:56.937904 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-10 00:37:56.937923 | orchestrator |
2026-03-10 00:37:56.937941 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-10 00:37:56.937952 | orchestrator | Tuesday 10 March 2026 00:37:54 +0000 (0:00:00.131) 0:00:04.196 *********
2026-03-10 00:37:56.937963 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:37:56.937973 | orchestrator |
2026-03-10 00:37:56.937984 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-10 00:37:56.938007 | orchestrator | Tuesday 10 March 2026 00:37:54 +0000 (0:00:00.085) 0:00:04.281 *********
2026-03-10 00:37:56.938080 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:56.938093 | orchestrator |
2026-03-10 00:37:56.938104 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-10 00:37:56.938115 | orchestrator | Tuesday 10 March 2026 00:37:55 +0000 (0:00:00.659) 0:00:04.940 *********
2026-03-10 00:37:56.938125 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:37:56.938137 | orchestrator |
2026-03-10 00:37:56.938148 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-10 00:37:56.938158 | orchestrator |
2026-03-10 00:37:56.938169 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-10 00:37:56.938180 | orchestrator | Tuesday 10 March 2026 00:37:55 +0000 (0:00:00.149) 0:00:05.089 *********
2026-03-10 00:37:56.938191 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:37:56.938201 | orchestrator |
2026-03-10 00:37:56.938212 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-10 00:37:56.938223 | orchestrator | Tuesday 10 March 2026 00:37:55 +0000 (0:00:00.107) 0:00:05.197 *********
2026-03-10 00:37:56.938234 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:56.938244 | orchestrator |
2026-03-10 00:37:56.938255 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-10 00:37:56.938266 | orchestrator | Tuesday 10 March 2026 00:37:56 +0000 (0:00:00.651) 0:00:05.849 *********
2026-03-10 00:37:56.938298 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:37:56.938309 | orchestrator |
2026-03-10 00:37:56.938320 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:37:56.938332 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:56.938344 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:56.938355 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:56.938365 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:56.938376 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:56.938387 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:37:56.938398 | orchestrator |
2026-03-10 00:37:56.938408 | orchestrator |
2026-03-10 00:37:56.938419 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:37:56.938439 | orchestrator | Tuesday 10 March 2026 00:37:56 +0000 (0:00:00.036) 0:00:05.885 *********
2026-03-10 00:37:56.938450 | orchestrator | ===============================================================================
2026-03-10 00:37:56.938460 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.29s
2026-03-10 00:37:56.938471 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2026-03-10 00:37:56.938481 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2026-03-10 00:37:57.298818 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-03-10 00:38:09.482093 | orchestrator | 2026-03-10 00:38:09 | INFO  | Task a0076062-ce59-40a9-8b99-27c0b2cf202a (wait-for-connection) was prepared for execution.
2026-03-10 00:38:09.482193 | orchestrator | 2026-03-10 00:38:09 | INFO  | It takes a moment until task a0076062-ce59-40a9-8b99-27c0b2cf202a (wait-for-connection) has been started and output is visible here.
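The reboot plays above only proceed when the run is invoked with `-e ireallymeanit=yes`; otherwise the "Exit playbook, if user did not mean to reboot systems" task aborts (it shows as `skipping` here because the flag was passed). A minimal sketch of that confirmation guard in plain shell; `confirm_reboot` is an illustrative name, not code from the job:

```shell
# Refuse a destructive action unless the caller explicitly confirms it,
# mirroring the ireallymeanit=yes guard in the reboot play above.
confirm_reboot() {
    local ireallymeanit=${1:-no}
    if [ "$ireallymeanit" != yes ]; then
        echo "refusing to reboot: pass ireallymeanit=yes to confirm" >&2
        return 1
    fi
    echo "proceeding with reboot"
}

# Usage mirroring the trace: confirm_reboot yes
```

The point of the pattern is that an accidental `osism apply reboot` without the extra variable is a no-op rather than a fleet-wide reboot.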
2026-03-10 00:38:26.248695 | orchestrator | 2026-03-10 00:38:26.248836 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-10 00:38:26.248864 | orchestrator | 2026-03-10 00:38:26.248885 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-10 00:38:26.248898 | orchestrator | Tuesday 10 March 2026 00:38:14 +0000 (0:00:00.245) 0:00:00.246 ********* 2026-03-10 00:38:26.248909 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:26.248921 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:26.248932 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:26.248943 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:26.248954 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:26.248964 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:26.248975 | orchestrator | 2026-03-10 00:38:26.248986 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:38:26.248997 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:38:26.249010 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:38:26.249021 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:38:26.249031 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:38:26.249042 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:38:26.249053 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:38:26.249065 | orchestrator | 2026-03-10 00:38:26.249076 | orchestrator | 2026-03-10 00:38:26.249087 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-10 00:38:26.249098 | orchestrator | Tuesday 10 March 2026 00:38:25 +0000 (0:00:11.552) 0:00:11.798 ********* 2026-03-10 00:38:26.249108 | orchestrator | =============================================================================== 2026-03-10 00:38:26.249119 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-03-10 00:38:26.560038 | orchestrator | + osism apply hddtemp 2026-03-10 00:38:38.709974 | orchestrator | 2026-03-10 00:38:38 | INFO  | Task 4caf9311-7779-44f1-8083-6a6d7eba4c18 (hddtemp) was prepared for execution. 2026-03-10 00:38:38.710125 | orchestrator | 2026-03-10 00:38:38 | INFO  | It takes a moment until task 4caf9311-7779-44f1-8083-6a6d7eba4c18 (hddtemp) has been started and output is visible here. 2026-03-10 00:39:07.099644 | orchestrator | 2026-03-10 00:39:07.099802 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-10 00:39:07.099823 | orchestrator | 2026-03-10 00:39:07.099835 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-10 00:39:07.099846 | orchestrator | Tuesday 10 March 2026 00:38:42 +0000 (0:00:00.229) 0:00:00.229 ********* 2026-03-10 00:39:07.099856 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:07.099867 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:39:07.099878 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:39:07.099887 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:39:07.099898 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:39:07.099908 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:39:07.099917 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:39:07.099927 | orchestrator | 2026-03-10 00:39:07.099937 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-10 00:39:07.099947 | orchestrator | Tuesday 10 March 2026 
00:38:43 +0000 (0:00:00.702) 0:00:00.931 ********* 2026-03-10 00:39:07.099959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:39:07.099994 | orchestrator | 2026-03-10 00:39:07.100005 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-10 00:39:07.100016 | orchestrator | Tuesday 10 March 2026 00:38:44 +0000 (0:00:01.196) 0:00:02.128 ********* 2026-03-10 00:39:07.100025 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:07.100035 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:39:07.100044 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:39:07.100054 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:39:07.100064 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:39:07.100074 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:39:07.100083 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:39:07.100093 | orchestrator | 2026-03-10 00:39:07.100117 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-10 00:39:07.100129 | orchestrator | Tuesday 10 March 2026 00:38:46 +0000 (0:00:01.996) 0:00:04.125 ********* 2026-03-10 00:39:07.100140 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:07.100152 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:39:07.100163 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:07.100174 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:39:07.100185 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:07.100196 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:07.100207 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:07.100218 | orchestrator | 2026-03-10 00:39:07.100230 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-10 00:39:07.100242 | orchestrator | Tuesday 10 March 2026 00:38:47 +0000 (0:00:01.279) 0:00:05.404 ********* 2026-03-10 00:39:07.100253 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:39:07.100264 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:39:07.100275 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:39:07.100286 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:39:07.100297 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:39:07.100308 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:39:07.100318 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:07.100330 | orchestrator | 2026-03-10 00:39:07.100341 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-10 00:39:07.100351 | orchestrator | Tuesday 10 March 2026 00:38:49 +0000 (0:00:01.241) 0:00:06.646 ********* 2026-03-10 00:39:07.100362 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:39:07.100373 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:39:07.100384 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:07.100395 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:39:07.100406 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:39:07.100417 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:39:07.100428 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:39:07.100439 | orchestrator | 2026-03-10 00:39:07.100451 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-10 00:39:07.100463 | orchestrator | Tuesday 10 March 2026 00:38:49 +0000 (0:00:00.863) 0:00:07.510 ********* 2026-03-10 00:39:07.100474 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:07.100485 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:39:07.100494 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:07.100504 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:07.100513 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 00:39:07.100523 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:07.100559 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:07.100572 | orchestrator | 2026-03-10 00:39:07.100581 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-10 00:39:07.100590 | orchestrator | Tuesday 10 March 2026 00:39:03 +0000 (0:00:13.578) 0:00:21.088 ********* 2026-03-10 00:39:07.100601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:39:07.100619 | orchestrator | 2026-03-10 00:39:07.100629 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-10 00:39:07.100638 | orchestrator | Tuesday 10 March 2026 00:39:04 +0000 (0:00:01.290) 0:00:22.379 ********* 2026-03-10 00:39:07.100648 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:07.100658 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:07.100667 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:39:07.100677 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:07.100686 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:39:07.100696 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:07.100705 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:07.100715 | orchestrator | 2026-03-10 00:39:07.100724 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:39:07.100734 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:39:07.100761 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:07.100773 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:07.100783 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:07.100792 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:07.100801 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:07.100811 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:07.100820 | orchestrator | 2026-03-10 00:39:07.100830 | orchestrator | 2026-03-10 00:39:07.100840 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:39:07.100849 | orchestrator | Tuesday 10 March 2026 00:39:06 +0000 (0:00:01.919) 0:00:24.298 ********* 2026-03-10 00:39:07.100859 | orchestrator | =============================================================================== 2026-03-10 00:39:07.100868 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.58s 2026-03-10 00:39:07.100878 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.00s 2026-03-10 00:39:07.100892 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2026-03-10 00:39:07.100902 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2026-03-10 00:39:07.100911 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.28s 2026-03-10 00:39:07.100921 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.24s 2026-03-10 00:39:07.100930 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.20s 2026-03-10 00:39:07.100939 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.86s 2026-03-10 00:39:07.100949 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2026-03-10 00:39:07.505105 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-10 00:39:07.550898 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-10 00:39:07.550993 | orchestrator | + sudo systemctl restart manager.service 2026-03-10 00:39:20.962346 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-10 00:39:20.962439 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-10 00:39:20.962449 | orchestrator | + local max_attempts=60 2026-03-10 00:39:20.962458 | orchestrator | + local name=ceph-ansible 2026-03-10 00:39:20.962465 | orchestrator | + local attempt_num=1 2026-03-10 00:39:20.962497 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:21.006818 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:21.006899 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:21.006908 | orchestrator | + sleep 5 2026-03-10 00:39:26.010783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:26.048240 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:26.048315 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:26.048324 | orchestrator | + sleep 5 2026-03-10 00:39:31.051682 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:31.089047 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:31.089128 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:31.089140 | orchestrator | + sleep 5 2026-03-10 00:39:36.094623 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:36.129299 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:36.129396 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-10 00:39:36.129411 | orchestrator | + sleep 5 2026-03-10 00:39:41.133906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:41.168552 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:41.168638 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:41.168659 | orchestrator | + sleep 5 2026-03-10 00:39:46.173964 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:46.214108 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:46.214201 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:46.214216 | orchestrator | + sleep 5 2026-03-10 00:39:51.218215 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:51.256077 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:51.256192 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:51.256208 | orchestrator | + sleep 5 2026-03-10 00:39:56.262223 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:39:56.309074 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:39:56.309175 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:39:56.309190 | orchestrator | + sleep 5 2026-03-10 00:40:01.311492 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:40:01.342195 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:01.342315 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:40:01.342345 | orchestrator | + sleep 5 2026-03-10 00:40:06.348351 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:40:06.389330 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:06.389417 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-10 00:40:06.389433 | orchestrator | + sleep 5 2026-03-10 00:40:11.394113 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:40:11.427867 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:11.427966 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:40:11.427992 | orchestrator | + sleep 5 2026-03-10 00:40:16.432812 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:40:16.471044 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:16.471133 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:40:16.471146 | orchestrator | + sleep 5 2026-03-10 00:40:21.476330 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:40:21.510145 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:21.510238 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:40:21.510253 | orchestrator | + sleep 5 2026-03-10 00:40:26.514543 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:40:26.551918 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:26.552018 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-10 00:40:26.552033 | orchestrator | + local max_attempts=60 2026-03-10 00:40:26.552044 | orchestrator | + local name=kolla-ansible 2026-03-10 00:40:26.552055 | orchestrator | + local attempt_num=1 2026-03-10 00:40:26.552361 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-10 00:40:26.593847 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:26.593970 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-10 00:40:26.593984 | orchestrator | + local max_attempts=60 2026-03-10 00:40:26.593996 | orchestrator | + local name=osism-ansible 2026-03-10 00:40:26.594007 | 
orchestrator | + local attempt_num=1 2026-03-10 00:40:26.594724 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-10 00:40:26.629386 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:40:26.629525 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-10 00:40:26.629542 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-10 00:40:26.803024 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-10 00:40:26.948422 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-10 00:40:27.089104 | orchestrator | ARA in osism-ansible already disabled. 2026-03-10 00:40:27.233922 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-10 00:40:27.234452 | orchestrator | + osism apply gather-facts 2026-03-10 00:40:39.369188 | orchestrator | 2026-03-10 00:40:39 | INFO  | Task 2a034eda-0d4b-4170-a3b1-9e02c57dfd75 (gather-facts) was prepared for execution. 2026-03-10 00:40:39.369318 | orchestrator | 2026-03-10 00:40:39 | INFO  | It takes a moment until task 2a034eda-0d4b-4170-a3b1-9e02c57dfd75 (gather-facts) has been started and output is visible here. 
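The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` every 5 seconds until the container's health status reads `healthy` (the ceph-ansible container walks through `unhealthy`, then `starting`, then `healthy`). A reconstruction of that helper, consistent with the trace; the `DOCKER` and `WAIT_INTERVAL` variables are added here only so the sketch can run without a Docker daemon — the job itself hardcodes `/usr/bin/docker` and `sleep 5`:

```shell
# Poll a container's health status until it becomes "healthy" or
# max_attempts polls have been made, waiting between attempts.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$("${DOCKER:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")
        [[ $status == healthy ]] && return 0
        if (( attempt_num++ == max_attempts )); then
            echo "container $name still $status after $max_attempts attempts" >&2
            return 1
        fi
        sleep "${WAIT_INTERVAL:-5}"
    done
}
```

With a 5-second interval and `max_attempts=60`, the helper gives a container roughly five minutes to report healthy before the deploy script fails.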
2026-03-10 00:40:53.201795 | orchestrator | 2026-03-10 00:40:53.201927 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:40:53.201945 | orchestrator | 2026-03-10 00:40:53.201958 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-10 00:40:53.201970 | orchestrator | Tuesday 10 March 2026 00:40:43 +0000 (0:00:00.221) 0:00:00.221 ********* 2026-03-10 00:40:53.201982 | orchestrator | ok: [testbed-manager] 2026-03-10 00:40:53.201994 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:40:53.202005 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:40:53.202075 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:40:53.202089 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:40:53.202100 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:40:53.202111 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:40:53.202121 | orchestrator | 2026-03-10 00:40:53.202133 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-10 00:40:53.202144 | orchestrator | 2026-03-10 00:40:53.202154 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-10 00:40:53.202165 | orchestrator | Tuesday 10 March 2026 00:40:52 +0000 (0:00:08.560) 0:00:08.781 ********* 2026-03-10 00:40:53.202177 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:40:53.202189 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:40:53.202200 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:40:53.202210 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:40:53.202221 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:40:53.202231 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:40:53.202242 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:40:53.202252 | orchestrator | 2026-03-10 00:40:53.202263 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-10 00:40:53.202274 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202287 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202297 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202308 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202319 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202330 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202372 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:40:53.202386 | orchestrator | 2026-03-10 00:40:53.202399 | orchestrator | 2026-03-10 00:40:53.202411 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:40:53.202423 | orchestrator | Tuesday 10 March 2026 00:40:52 +0000 (0:00:00.530) 0:00:09.312 ********* 2026-03-10 00:40:53.202435 | orchestrator | =============================================================================== 2026-03-10 00:40:53.202447 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.56s 2026-03-10 00:40:53.202459 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-03-10 00:40:53.558475 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-10 00:40:53.570839 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-10 
00:40:53.584541 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-10 00:40:53.599385 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-10 00:40:53.617108 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-10 00:40:53.630916 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-10 00:40:53.645453 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-10 00:40:53.656029 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-10 00:40:53.670329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-10 00:40:53.680340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-10 00:40:53.694312 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-10 00:40:53.708112 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-10 00:40:53.727943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-10 00:40:53.746816 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-10 00:40:53.759587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-10 00:40:53.775499 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-10 00:40:53.792105 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-10 00:40:53.805454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-10 00:40:53.822641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-10 00:40:53.838597 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-10 00:40:53.852262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-10 00:40:53.870953 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-10 00:40:53.885391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-10 00:40:53.898250 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-10 00:40:54.248559 | orchestrator | ok: Runtime: 0:25:10.562055 2026-03-10 00:40:54.363206 | 2026-03-10 00:40:54.363409 | TASK [Deploy services] 2026-03-10 00:40:54.903490 | orchestrator | skipping: Conditional result was False 2026-03-10 00:40:54.922230 | 2026-03-10 00:40:54.922461 | TASK [Deploy in a nutshell] 2026-03-10 00:40:55.624525 | orchestrator | + set -e 2026-03-10 00:40:55.625694 | orchestrator | 2026-03-10 00:40:55.625752 | orchestrator | # PULL IMAGES 2026-03-10 00:40:55.625777 | orchestrator | 2026-03-10 00:40:55.625808 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-10 00:40:55.625839 | orchestrator | ++ export INTERACTIVE=false 2026-03-10 00:40:55.625855 | orchestrator | ++ INTERACTIVE=false 2026-03-10 00:40:55.625900 | 
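The run of `sudo ln -sf` commands above publishes each deploy/upgrade script under a short, stable name in `/usr/local/bin`. The same idea condensed into a loop; `install_helpers` and the trimmed-down map are illustrative (the job links many more scripts one `ln` at a time), though the paths shown mirror the trace:

```shell
# Install stable command names for a set of versioned helper scripts.
install_helpers() {
    local bin_dir=$1
    local -A helpers=(
        [deploy-infrastructure]=/opt/configuration/scripts/deploy/200-infrastructure.sh
        [deploy-openstack]=/opt/configuration/scripts/deploy/300-openstack.sh
        [upgrade-openstack]=/opt/configuration/scripts/upgrade/300-openstack.sh
    )
    local name
    for name in "${!helpers[@]}"; do
        # -s via ln default here is not enough: -f replaces an existing
        # link so repeated runs stay idempotent.
        ln -sf "${helpers[$name]}" "$bin_dir/$name"
    done
}
```

Using `-f` is what lets the bootstrap re-run safely: a stale symlink from a previous deployment is simply overwritten.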
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-10 00:40:55.625923 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-10 00:40:55.625938 | orchestrator | + source /opt/manager-vars.sh 2026-03-10 00:40:55.625950 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-10 00:40:55.625975 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-10 00:40:55.625994 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-10 00:40:55.626074 | orchestrator | ++ CEPH_VERSION=reef 2026-03-10 00:40:55.626091 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-10 00:40:55.626108 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-10 00:40:55.626120 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-10 00:40:55.626135 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-10 00:40:55.626147 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-10 00:40:55.626159 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-10 00:40:55.626170 | orchestrator | ++ export ARA=false 2026-03-10 00:40:55.626181 | orchestrator | ++ ARA=false 2026-03-10 00:40:55.626192 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-10 00:40:55.626203 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-10 00:40:55.626214 | orchestrator | ++ export TEMPEST=true 2026-03-10 00:40:55.626225 | orchestrator | ++ TEMPEST=true 2026-03-10 00:40:55.626236 | orchestrator | ++ export IS_ZUUL=true 2026-03-10 00:40:55.626247 | orchestrator | ++ IS_ZUUL=true 2026-03-10 00:40:55.626258 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:40:55.626270 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.64 2026-03-10 00:40:55.626281 | orchestrator | ++ export EXTERNAL_API=false 2026-03-10 00:40:55.626316 | orchestrator | ++ EXTERNAL_API=false 2026-03-10 00:40:55.626327 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-10 00:40:55.626339 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-10 00:40:55.626350 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-10 00:40:55.626360 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-10 00:40:55.626371 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-10 00:40:55.626391 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-10 00:40:55.626403 | orchestrator | + echo 2026-03-10 00:40:55.626414 | orchestrator | + echo '# PULL IMAGES' 2026-03-10 00:40:55.626424 | orchestrator | + echo 2026-03-10 00:40:55.626445 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-10 00:40:55.670976 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-10 00:40:55.671087 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-10 00:40:57.644417 | orchestrator | 2026-03-10 00:40:57 | INFO  | Trying to run play pull-images in environment custom 2026-03-10 00:41:07.789616 | orchestrator | 2026-03-10 00:41:07 | INFO  | Task 1b81d31d-cbee-4b58-baeb-de83c8e027a7 (pull-images) was prepared for execution. 2026-03-10 00:41:07.789762 | orchestrator | 2026-03-10 00:41:07 | INFO  | Task 1b81d31d-cbee-4b58-baeb-de83c8e027a7 is running in background. No more output. Check ARA for logs. 2026-03-10 00:41:10.189249 | orchestrator | 2026-03-10 00:41:10 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-10 00:41:20.277095 | orchestrator | 2026-03-10 00:41:20 | INFO  | Task e6cd1dd6-90af-4011-84f3-d8d768462cb3 (wipe-partitions) was prepared for execution. 2026-03-10 00:41:20.277209 | orchestrator | 2026-03-10 00:41:20 | INFO  | It takes a moment until task e6cd1dd6-90af-4011-84f3-d8d768462cb3 (wipe-partitions) has been started and output is visible here. 
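The trace above gates the image pull on the manager version — `semver 9.5.0 7.0.0` returns a comparison result, and only a non-negative value (`[[ 1 -ge 0 ]]`) lets the script background `osism apply --no-wait -r 2 -e custom pull-images`. A minimal sketch of that gate, with `semver_cmp` as a hypothetical stand-in for the `semver` helper sourced from include.sh (its real implementation is not visible in this log):

```shell
#!/usr/bin/env bash
set -e

MANAGER_VERSION=9.5.0   # exported by /opt/manager-vars.sh in the trace

# Hypothetical comparator standing in for the `semver` helper: prints
# -1, 0, or 1 depending on how $1 compares to $2 (uses GNU sort -V).
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

# Mirror of the traced gate: only pull images on manager >= 7.0.0.
if [ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The `--no-wait` flag matches what the log shows next: the task is prepared, backgrounded, and further output goes to ARA rather than the console.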
2026-03-10 00:41:35.883780 | orchestrator | 2026-03-10 00:41:35.883885 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-10 00:41:35.883907 | orchestrator | 2026-03-10 00:41:35.883924 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-10 00:41:35.883945 | orchestrator | Tuesday 10 March 2026 00:41:25 +0000 (0:00:00.135) 0:00:00.135 ********* 2026-03-10 00:41:35.883962 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:41:35.883979 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:41:35.883996 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:41:35.884013 | orchestrator | 2026-03-10 00:41:35.884030 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-10 00:41:35.884072 | orchestrator | Tuesday 10 March 2026 00:41:26 +0000 (0:00:01.621) 0:00:01.757 ********* 2026-03-10 00:41:35.884089 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:41:35.884104 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:41:35.884120 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:41:35.884140 | orchestrator | 2026-03-10 00:41:35.884156 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-10 00:41:35.884173 | orchestrator | Tuesday 10 March 2026 00:41:27 +0000 (0:00:00.370) 0:00:02.128 ********* 2026-03-10 00:41:35.884189 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:41:35.884205 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:41:35.884220 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:41:35.884237 | orchestrator | 2026-03-10 00:41:35.884253 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-10 00:41:35.884269 | orchestrator | Tuesday 10 March 2026 00:41:27 +0000 (0:00:00.608) 0:00:02.736 ********* 2026-03-10 00:41:35.884286 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 00:41:35.884304 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:41:35.884320 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:41:35.884330 | orchestrator | 2026-03-10 00:41:35.884340 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-10 00:41:35.884351 | orchestrator | Tuesday 10 March 2026 00:41:27 +0000 (0:00:00.244) 0:00:02.980 ********* 2026-03-10 00:41:35.884361 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-10 00:41:35.884376 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-10 00:41:35.884386 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-10 00:41:35.884396 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-10 00:41:35.884406 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-10 00:41:35.884416 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-10 00:41:35.884426 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-10 00:41:35.884436 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-10 00:41:35.884445 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-10 00:41:35.884508 | orchestrator | 2026-03-10 00:41:35.884529 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-10 00:41:35.884546 | orchestrator | Tuesday 10 March 2026 00:41:30 +0000 (0:00:02.150) 0:00:05.130 ********* 2026-03-10 00:41:35.884563 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-10 00:41:35.884581 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-10 00:41:35.884596 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-10 00:41:35.884609 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-10 00:41:35.884619 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-10 00:41:35.884629 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-10 00:41:35.884639 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-10 00:41:35.884648 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-10 00:41:35.884659 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-10 00:41:35.884669 | orchestrator | 2026-03-10 00:41:35.884678 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-10 00:41:35.884687 | orchestrator | Tuesday 10 March 2026 00:41:31 +0000 (0:00:01.590) 0:00:06.721 ********* 2026-03-10 00:41:35.884702 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-10 00:41:35.884717 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-10 00:41:35.884732 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-10 00:41:35.884746 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-10 00:41:35.884761 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-10 00:41:35.884776 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-10 00:41:35.884790 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-10 00:41:35.884813 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-10 00:41:35.884839 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-10 00:41:35.884854 | orchestrator | 2026-03-10 00:41:35.884869 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-10 00:41:35.884883 | orchestrator | Tuesday 10 March 2026 00:41:34 +0000 (0:00:02.520) 0:00:09.242 ********* 2026-03-10 00:41:35.884898 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:41:35.884913 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:41:35.884927 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:41:35.884942 | orchestrator | 2026-03-10 00:41:35.884957 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-10 00:41:35.884971 | orchestrator | Tuesday 10 March 2026 00:41:34 +0000 (0:00:00.727) 0:00:09.969 ********* 2026-03-10 00:41:35.884986 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:41:35.885001 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:41:35.885015 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:41:35.885030 | orchestrator | 2026-03-10 00:41:35.885045 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:41:35.885060 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:41:35.885077 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:41:35.885111 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:41:35.885126 | orchestrator | 2026-03-10 00:41:35.885141 | orchestrator | 2026-03-10 00:41:35.885155 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:41:35.885170 | orchestrator | Tuesday 10 March 2026 00:41:35 +0000 (0:00:00.766) 0:00:10.735 ********* 2026-03-10 00:41:35.885185 | orchestrator | =============================================================================== 2026-03-10 00:41:35.885200 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.52s 2026-03-10 00:41:35.885214 | orchestrator | Check device availability ----------------------------------------------- 2.15s 2026-03-10 00:41:35.885229 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.62s 2026-03-10 00:41:35.885243 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s 2026-03-10 00:41:35.885258 | orchestrator | Request device events from the kernel 
----------------------------------- 0.77s 2026-03-10 00:41:35.885273 | orchestrator | Reload udev rules ------------------------------------------------------- 0.73s 2026-03-10 00:41:35.885287 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2026-03-10 00:41:35.885302 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s 2026-03-10 00:41:35.885316 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-03-10 00:41:47.991050 | orchestrator | 2026-03-10 00:41:47 | INFO  | Task 40f93cc6-4a6f-43b2-b3ac-7ebf538a1276 (facts) was prepared for execution. 2026-03-10 00:41:47.991166 | orchestrator | 2026-03-10 00:41:47 | INFO  | It takes a moment until task 40f93cc6-4a6f-43b2-b3ac-7ebf538a1276 (facts) has been started and output is visible here. 2026-03-10 00:42:01.258317 | orchestrator | 2026-03-10 00:42:01.258418 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-10 00:42:01.258434 | orchestrator | 2026-03-10 00:42:01.258445 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-10 00:42:01.258516 | orchestrator | Tuesday 10 March 2026 00:41:52 +0000 (0:00:00.317) 0:00:00.317 ********* 2026-03-10 00:42:01.258526 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:42:01.258537 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:42:01.258547 | orchestrator | ok: [testbed-manager] 2026-03-10 00:42:01.258557 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:42:01.258595 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:42:01.258605 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:42:01.258615 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:42:01.258624 | orchestrator | 2026-03-10 00:42:01.258634 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-10 00:42:01.258644 | 
orchestrator | Tuesday 10 March 2026 00:41:54 +0000 (0:00:01.535) 0:00:01.853 ********* 2026-03-10 00:42:01.258654 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:42:01.258664 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:42:01.258677 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:42:01.258686 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:42:01.258696 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:01.258705 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:01.258715 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:42:01.258724 | orchestrator | 2026-03-10 00:42:01.258734 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:42:01.258744 | orchestrator | 2026-03-10 00:42:01.258753 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-10 00:42:01.258763 | orchestrator | Tuesday 10 March 2026 00:41:55 +0000 (0:00:01.361) 0:00:03.214 ********* 2026-03-10 00:42:01.258772 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:42:01.258782 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:42:01.258791 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:42:01.258801 | orchestrator | ok: [testbed-manager] 2026-03-10 00:42:01.258811 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:42:01.258821 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:42:01.258830 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:42:01.258840 | orchestrator | 2026-03-10 00:42:01.258849 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-10 00:42:01.258859 | orchestrator | 2026-03-10 00:42:01.258870 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-10 00:42:01.258882 | orchestrator | Tuesday 10 March 2026 00:42:00 +0000 (0:00:04.866) 0:00:08.081 ********* 2026-03-10 00:42:01.258893 | orchestrator | 
skipping: [testbed-manager] 2026-03-10 00:42:01.258903 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:42:01.258915 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:42:01.258926 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:42:01.258954 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:01.258984 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:01.258995 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:42:01.259006 | orchestrator | 2026-03-10 00:42:01.259018 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:42:01.259030 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259042 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259053 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259064 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259075 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259087 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259098 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:42:01.259109 | orchestrator | 2026-03-10 00:42:01.259120 | orchestrator | 2026-03-10 00:42:01.259132 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:42:01.259151 | orchestrator | Tuesday 10 March 2026 00:42:00 +0000 (0:00:00.546) 0:00:08.627 ********* 2026-03-10 00:42:01.259163 | orchestrator | =============================================================================== 
2026-03-10 00:42:01.259174 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.87s 2026-03-10 00:42:01.259185 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.54s 2026-03-10 00:42:01.259196 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s 2026-03-10 00:42:01.259207 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-03-10 00:42:03.698105 | orchestrator | 2026-03-10 00:42:03 | INFO  | Task e808d653-0f3e-4c2a-b9bd-efce25c5a32d (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-10 00:42:03.698230 | orchestrator | 2026-03-10 00:42:03 | INFO  | It takes a moment until task e808d653-0f3e-4c2a-b9bd-efce25c5a32d (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-10 00:42:15.911288 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-10 00:42:15.911344 | orchestrator | 2.16.14 2026-03-10 00:42:15.911351 | orchestrator | 2026-03-10 00:42:15.911355 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-10 00:42:15.911360 | orchestrator | 2026-03-10 00:42:15.911364 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:42:15.911368 | orchestrator | Tuesday 10 March 2026 00:42:08 +0000 (0:00:00.354) 0:00:00.354 ********* 2026-03-10 00:42:15.911373 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-10 00:42:15.911378 | orchestrator | 2026-03-10 00:42:15.911381 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 00:42:15.911385 | orchestrator | Tuesday 10 March 2026 00:42:08 +0000 (0:00:00.262) 0:00:00.617 ********* 2026-03-10 00:42:15.911389 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:42:15.911393 | orchestrator | 
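The wipe-partitions play that ran earlier (wipefs, zeroing the first 32M of each OSD disk, then a udev refresh) can be sketched as the following script. To stay safely runnable it targets a scratch file standing in for `/dev/sdb..sdd`; the udev steps, which need root and a live udev daemon, are shown as comments only. This is a reconstruction from the task names, not the play's actual implementation.

```shell
#!/usr/bin/env bash
set -e

scratch=$(mktemp)           # stand-in for a real OSD disk; safe to destroy
truncate -s 64M "$scratch"

for dev in "$scratch"; do
    # TASK [Wipe partitions with wipefs]
    wipefs -a "$dev"
    # TASK [Overwrite first 32M with zeros]
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
done

# On real nodes the play then refreshes udev (root + udev required):
#   udevadm control --reload-rules    # TASK [Reload udev rules]
#   udevadm trigger                   # TASK [Request device events from the kernel]

echo "wiped: $scratch"
```

Pointing `dev` at real block devices reproduces the destructive behaviour seen in the recap (`changed` on every node for the dd step), so it belongs only on throwaway testbed nodes.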
2026-03-10 00:42:15.911397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911401 | orchestrator | Tuesday 10 March 2026 00:42:08 +0000 (0:00:00.244) 0:00:00.861 ********* 2026-03-10 00:42:15.911405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-10 00:42:15.911409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-10 00:42:15.911412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-10 00:42:15.911416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-10 00:42:15.911420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-10 00:42:15.911424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-10 00:42:15.911427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-10 00:42:15.911431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-10 00:42:15.911435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-10 00:42:15.911439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-10 00:42:15.911471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-10 00:42:15.911496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-10 00:42:15.911505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-10 00:42:15.911509 | orchestrator | 2026-03-10 00:42:15.911513 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-03-10 00:42:15.911517 | orchestrator | Tuesday 10 March 2026 00:42:09 +0000 (0:00:00.528) 0:00:01.390 ********* 2026-03-10 00:42:15.911530 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911534 | orchestrator | 2026-03-10 00:42:15.911538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911542 | orchestrator | Tuesday 10 March 2026 00:42:09 +0000 (0:00:00.211) 0:00:01.601 ********* 2026-03-10 00:42:15.911546 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911550 | orchestrator | 2026-03-10 00:42:15.911553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911557 | orchestrator | Tuesday 10 March 2026 00:42:09 +0000 (0:00:00.222) 0:00:01.824 ********* 2026-03-10 00:42:15.911565 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911569 | orchestrator | 2026-03-10 00:42:15.911573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911577 | orchestrator | Tuesday 10 March 2026 00:42:09 +0000 (0:00:00.206) 0:00:02.031 ********* 2026-03-10 00:42:15.911606 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911611 | orchestrator | 2026-03-10 00:42:15.911615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911619 | orchestrator | Tuesday 10 March 2026 00:42:10 +0000 (0:00:00.216) 0:00:02.247 ********* 2026-03-10 00:42:15.911622 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911626 | orchestrator | 2026-03-10 00:42:15.911630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911634 | orchestrator | Tuesday 10 March 2026 00:42:10 +0000 (0:00:00.197) 0:00:02.445 ********* 2026-03-10 00:42:15.911638 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 00:42:15.911642 | orchestrator | 2026-03-10 00:42:15.911645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911649 | orchestrator | Tuesday 10 March 2026 00:42:10 +0000 (0:00:00.202) 0:00:02.648 ********* 2026-03-10 00:42:15.911653 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911678 | orchestrator | 2026-03-10 00:42:15.911720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911724 | orchestrator | Tuesday 10 March 2026 00:42:10 +0000 (0:00:00.218) 0:00:02.867 ********* 2026-03-10 00:42:15.911728 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.911732 | orchestrator | 2026-03-10 00:42:15.911735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911739 | orchestrator | Tuesday 10 March 2026 00:42:10 +0000 (0:00:00.215) 0:00:03.082 ********* 2026-03-10 00:42:15.911743 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf) 2026-03-10 00:42:15.911747 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf) 2026-03-10 00:42:15.911751 | orchestrator | 2026-03-10 00:42:15.911755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911767 | orchestrator | Tuesday 10 March 2026 00:42:11 +0000 (0:00:00.412) 0:00:03.495 ********* 2026-03-10 00:42:15.911771 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a) 2026-03-10 00:42:15.911810 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a) 2026-03-10 00:42:15.911814 | orchestrator | 2026-03-10 00:42:15.911818 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-03-10 00:42:15.911822 | orchestrator | Tuesday 10 March 2026 00:42:12 +0000 (0:00:00.691) 0:00:04.187 ********* 2026-03-10 00:42:15.911825 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d) 2026-03-10 00:42:15.911829 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d) 2026-03-10 00:42:15.911833 | orchestrator | 2026-03-10 00:42:15.911837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911840 | orchestrator | Tuesday 10 March 2026 00:42:12 +0000 (0:00:00.677) 0:00:04.865 ********* 2026-03-10 00:42:15.911848 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1) 2026-03-10 00:42:15.911852 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1) 2026-03-10 00:42:15.911867 | orchestrator | 2026-03-10 00:42:15.911872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:15.911877 | orchestrator | Tuesday 10 March 2026 00:42:13 +0000 (0:00:00.949) 0:00:05.814 ********* 2026-03-10 00:42:15.911881 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:42:15.911885 | orchestrator | 2026-03-10 00:42:15.911889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.911894 | orchestrator | Tuesday 10 March 2026 00:42:14 +0000 (0:00:00.356) 0:00:06.171 ********* 2026-03-10 00:42:15.911901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-10 00:42:15.911905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-10 00:42:15.911909 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-10 00:42:15.911914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-10 00:42:15.911918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-10 00:42:15.911922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-10 00:42:15.911926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-10 00:42:15.911930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-10 00:42:15.911935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-10 00:42:15.911939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-10 00:42:15.911943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-10 00:42:15.911948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-10 00:42:15.912006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-10 00:42:15.912012 | orchestrator | 2026-03-10 00:42:15.912017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912021 | orchestrator | Tuesday 10 March 2026 00:42:14 +0000 (0:00:00.396) 0:00:06.567 ********* 2026-03-10 00:42:15.912025 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912030 | orchestrator | 2026-03-10 00:42:15.912034 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912038 | orchestrator | Tuesday 10 March 2026 00:42:14 +0000 (0:00:00.221) 
0:00:06.789 ********* 2026-03-10 00:42:15.912042 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912047 | orchestrator | 2026-03-10 00:42:15.912051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912055 | orchestrator | Tuesday 10 March 2026 00:42:14 +0000 (0:00:00.219) 0:00:07.008 ********* 2026-03-10 00:42:15.912059 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912063 | orchestrator | 2026-03-10 00:42:15.912071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912075 | orchestrator | Tuesday 10 March 2026 00:42:15 +0000 (0:00:00.206) 0:00:07.215 ********* 2026-03-10 00:42:15.912078 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912082 | orchestrator | 2026-03-10 00:42:15.912086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912090 | orchestrator | Tuesday 10 March 2026 00:42:15 +0000 (0:00:00.190) 0:00:07.405 ********* 2026-03-10 00:42:15.912093 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912100 | orchestrator | 2026-03-10 00:42:15.912104 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912108 | orchestrator | Tuesday 10 March 2026 00:42:15 +0000 (0:00:00.192) 0:00:07.597 ********* 2026-03-10 00:42:15.912112 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912115 | orchestrator | 2026-03-10 00:42:15.912119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:15.912123 | orchestrator | Tuesday 10 March 2026 00:42:15 +0000 (0:00:00.198) 0:00:07.795 ********* 2026-03-10 00:42:15.912126 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:15.912130 | orchestrator | 2026-03-10 00:42:15.912137 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2026-03-10 00:42:23.340180 | orchestrator | Tuesday 10 March 2026 00:42:15 +0000 (0:00:00.187) 0:00:07.983 ********* 2026-03-10 00:42:23.340320 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340352 | orchestrator | 2026-03-10 00:42:23.340366 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:23.340378 | orchestrator | Tuesday 10 March 2026 00:42:16 +0000 (0:00:00.196) 0:00:08.180 ********* 2026-03-10 00:42:23.340389 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-10 00:42:23.340401 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-10 00:42:23.340412 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-10 00:42:23.340423 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-10 00:42:23.340434 | orchestrator | 2026-03-10 00:42:23.340476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:23.340488 | orchestrator | Tuesday 10 March 2026 00:42:16 +0000 (0:00:00.865) 0:00:09.045 ********* 2026-03-10 00:42:23.340498 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340509 | orchestrator | 2026-03-10 00:42:23.340520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:23.340531 | orchestrator | Tuesday 10 March 2026 00:42:17 +0000 (0:00:00.196) 0:00:09.241 ********* 2026-03-10 00:42:23.340542 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340553 | orchestrator | 2026-03-10 00:42:23.340564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:23.340575 | orchestrator | Tuesday 10 March 2026 00:42:17 +0000 (0:00:00.185) 0:00:09.427 ********* 2026-03-10 00:42:23.340586 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340597 | orchestrator | 2026-03-10 00:42:23.340607 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:23.340618 | orchestrator | Tuesday 10 March 2026 00:42:17 +0000 (0:00:00.194) 0:00:09.622 ********* 2026-03-10 00:42:23.340629 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340640 | orchestrator | 2026-03-10 00:42:23.340650 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-10 00:42:23.340661 | orchestrator | Tuesday 10 March 2026 00:42:17 +0000 (0:00:00.200) 0:00:09.822 ********* 2026-03-10 00:42:23.340672 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-10 00:42:23.340687 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-10 00:42:23.340704 | orchestrator | 2026-03-10 00:42:23.340717 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-10 00:42:23.340730 | orchestrator | Tuesday 10 March 2026 00:42:17 +0000 (0:00:00.153) 0:00:09.975 ********* 2026-03-10 00:42:23.340756 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340770 | orchestrator | 2026-03-10 00:42:23.340792 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-10 00:42:23.340825 | orchestrator | Tuesday 10 March 2026 00:42:18 +0000 (0:00:00.123) 0:00:10.098 ********* 2026-03-10 00:42:23.340838 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340850 | orchestrator | 2026-03-10 00:42:23.340862 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-10 00:42:23.340875 | orchestrator | Tuesday 10 March 2026 00:42:18 +0000 (0:00:00.140) 0:00:10.239 ********* 2026-03-10 00:42:23.340912 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.340925 | orchestrator | 2026-03-10 00:42:23.340938 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-03-10 00:42:23.340950 | orchestrator | Tuesday 10 March 2026 00:42:18 +0000 (0:00:00.145) 0:00:10.385 ********* 2026-03-10 00:42:23.340963 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:42:23.341018 | orchestrator | 2026-03-10 00:42:23.341031 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-10 00:42:23.341042 | orchestrator | Tuesday 10 March 2026 00:42:18 +0000 (0:00:00.124) 0:00:10.509 ********* 2026-03-10 00:42:23.341056 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '120d91ae-c06d-5ca9-b450-85f2d491e96a'}}) 2026-03-10 00:42:23.341070 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07a8a029-b5c8-5530-8cc4-5b47064bbf55'}}) 2026-03-10 00:42:23.341080 | orchestrator | 2026-03-10 00:42:23.341091 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-10 00:42:23.341102 | orchestrator | Tuesday 10 March 2026 00:42:18 +0000 (0:00:00.172) 0:00:10.682 ********* 2026-03-10 00:42:23.341114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '120d91ae-c06d-5ca9-b450-85f2d491e96a'}})  2026-03-10 00:42:23.341134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07a8a029-b5c8-5530-8cc4-5b47064bbf55'}})  2026-03-10 00:42:23.341145 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341156 | orchestrator | 2026-03-10 00:42:23.341166 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-10 00:42:23.341177 | orchestrator | Tuesday 10 March 2026 00:42:18 +0000 (0:00:00.146) 0:00:10.829 ********* 2026-03-10 00:42:23.341188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '120d91ae-c06d-5ca9-b450-85f2d491e96a'}})  2026-03-10 00:42:23.341199 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07a8a029-b5c8-5530-8cc4-5b47064bbf55'}})  2026-03-10 00:42:23.341210 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341220 | orchestrator | 2026-03-10 00:42:23.341231 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-10 00:42:23.341242 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.340) 0:00:11.170 ********* 2026-03-10 00:42:23.341252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '120d91ae-c06d-5ca9-b450-85f2d491e96a'}})  2026-03-10 00:42:23.341283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07a8a029-b5c8-5530-8cc4-5b47064bbf55'}})  2026-03-10 00:42:23.341294 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341305 | orchestrator | 2026-03-10 00:42:23.341316 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-10 00:42:23.341327 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.164) 0:00:11.334 ********* 2026-03-10 00:42:23.341337 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:42:23.341348 | orchestrator | 2026-03-10 00:42:23.341359 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-10 00:42:23.341369 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.150) 0:00:11.485 ********* 2026-03-10 00:42:23.341380 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:42:23.341390 | orchestrator | 2026-03-10 00:42:23.341408 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-10 00:42:23.341419 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.158) 0:00:11.644 ********* 2026-03-10 00:42:23.341430 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341462 | orchestrator | 
2026-03-10 00:42:23.341475 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-10 00:42:23.341489 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.140) 0:00:11.785 ********* 2026-03-10 00:42:23.341520 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341540 | orchestrator | 2026-03-10 00:42:23.341559 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-10 00:42:23.341577 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.138) 0:00:11.923 ********* 2026-03-10 00:42:23.341595 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341614 | orchestrator | 2026-03-10 00:42:23.341633 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-10 00:42:23.341651 | orchestrator | Tuesday 10 March 2026 00:42:19 +0000 (0:00:00.132) 0:00:12.055 ********* 2026-03-10 00:42:23.341668 | orchestrator | ok: [testbed-node-3] => { 2026-03-10 00:42:23.341679 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:42:23.341690 | orchestrator |  "sdb": { 2026-03-10 00:42:23.341701 | orchestrator |  "osd_lvm_uuid": "120d91ae-c06d-5ca9-b450-85f2d491e96a" 2026-03-10 00:42:23.341716 | orchestrator |  }, 2026-03-10 00:42:23.341733 | orchestrator |  "sdc": { 2026-03-10 00:42:23.341752 | orchestrator |  "osd_lvm_uuid": "07a8a029-b5c8-5530-8cc4-5b47064bbf55" 2026-03-10 00:42:23.341771 | orchestrator |  } 2026-03-10 00:42:23.341789 | orchestrator |  } 2026-03-10 00:42:23.341818 | orchestrator | } 2026-03-10 00:42:23.341839 | orchestrator | 2026-03-10 00:42:23.341857 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-10 00:42:23.341875 | orchestrator | Tuesday 10 March 2026 00:42:20 +0000 (0:00:00.139) 0:00:12.195 ********* 2026-03-10 00:42:23.341892 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341909 | orchestrator | 
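
The `Print ceph_osd_devices` output above, together with the `Print configuration data` output that follows, shows how the per-disk `osd_lvm_uuid` values are turned into `lvm_volumes` entries: each OSD gets an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A sketch of that derivation (an assumption about the mapping based on the printed data, not OSISM's actual implementation):

```python
# Sketch (assumed from the log output, not the playbook source) of the
# "Generate lvm_volumes structure (block only)" / "Compile lvm_volumes"
# steps: derive LV/VG names from each device's osd_lvm_uuid. The UUIDs
# printed in the log look like version-5 (name-based) UUIDs, which would
# make them stable across re-runs, but that is an observation, not
# something the log confirms.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "120d91ae-c06d-5ca9-b450-85f2d491e96a"},
    "sdc": {"osd_lvm_uuid": "07a8a029-b5c8-5530-8cc4-5b47064bbf55"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]

print(lvm_volumes[0]["data_vg"])  # -> ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a
```

Because this run uses collocated block-only OSDs, the `block + db`, `block + wal`, and `block + db + wal` variants of the task all skip, and only this mapping contributes to the compiled `lvm_volumes` list.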
2026-03-10 00:42:23.341926 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-10 00:42:23.341944 | orchestrator | Tuesday 10 March 2026 00:42:20 +0000 (0:00:00.137) 0:00:12.333 ********* 2026-03-10 00:42:23.341961 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.341977 | orchestrator | 2026-03-10 00:42:23.341994 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-10 00:42:23.342012 | orchestrator | Tuesday 10 March 2026 00:42:20 +0000 (0:00:00.125) 0:00:12.458 ********* 2026-03-10 00:42:23.342110 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:42:23.342188 | orchestrator | 2026-03-10 00:42:23.342213 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-10 00:42:23.342231 | orchestrator | Tuesday 10 March 2026 00:42:20 +0000 (0:00:00.138) 0:00:12.597 ********* 2026-03-10 00:42:23.342249 | orchestrator | changed: [testbed-node-3] => { 2026-03-10 00:42:23.342268 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-10 00:42:23.342286 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:42:23.342303 | orchestrator |  "sdb": { 2026-03-10 00:42:23.342321 | orchestrator |  "osd_lvm_uuid": "120d91ae-c06d-5ca9-b450-85f2d491e96a" 2026-03-10 00:42:23.342339 | orchestrator |  }, 2026-03-10 00:42:23.342379 | orchestrator |  "sdc": { 2026-03-10 00:42:23.342401 | orchestrator |  "osd_lvm_uuid": "07a8a029-b5c8-5530-8cc4-5b47064bbf55" 2026-03-10 00:42:23.342436 | orchestrator |  } 2026-03-10 00:42:23.342516 | orchestrator |  }, 2026-03-10 00:42:23.342536 | orchestrator |  "lvm_volumes": [ 2026-03-10 00:42:23.342555 | orchestrator |  { 2026-03-10 00:42:23.342574 | orchestrator |  "data": "osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a", 2026-03-10 00:42:23.342593 | orchestrator |  "data_vg": "ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a" 2026-03-10 00:42:23.342610 | orchestrator |  }, 
2026-03-10 00:42:23.342629 | orchestrator |  { 2026-03-10 00:42:23.342647 | orchestrator |  "data": "osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55", 2026-03-10 00:42:23.342666 | orchestrator |  "data_vg": "ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55" 2026-03-10 00:42:23.342684 | orchestrator |  } 2026-03-10 00:42:23.342703 | orchestrator |  ] 2026-03-10 00:42:23.342722 | orchestrator |  } 2026-03-10 00:42:23.342740 | orchestrator | } 2026-03-10 00:42:23.342772 | orchestrator | 2026-03-10 00:42:23.342784 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-10 00:42:23.342795 | orchestrator | Tuesday 10 March 2026 00:42:20 +0000 (0:00:00.423) 0:00:13.020 ********* 2026-03-10 00:42:23.342806 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-10 00:42:23.342817 | orchestrator | 2026-03-10 00:42:23.342844 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-10 00:42:23.342856 | orchestrator | 2026-03-10 00:42:23.342867 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:42:23.342877 | orchestrator | Tuesday 10 March 2026 00:42:22 +0000 (0:00:01.853) 0:00:14.874 ********* 2026-03-10 00:42:23.342888 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-10 00:42:23.342899 | orchestrator | 2026-03-10 00:42:23.342910 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 00:42:23.342920 | orchestrator | Tuesday 10 March 2026 00:42:23 +0000 (0:00:00.258) 0:00:15.132 ********* 2026-03-10 00:42:23.342931 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:42:23.342942 | orchestrator | 2026-03-10 00:42:23.342969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535310 | orchestrator | Tuesday 10 March 2026 00:42:23 +0000 (0:00:00.281) 
0:00:15.414 ********* 2026-03-10 00:42:31.535410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-10 00:42:31.535426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:42:31.535508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:42:31.535519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:42:31.535527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:42:31.535535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:42:31.535543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:42:31.535555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:42:31.535581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-10 00:42:31.535604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:42:31.535617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:42:31.535630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:42:31.535648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:42:31.535661 | orchestrator | 2026-03-10 00:42:31.535675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535688 | orchestrator | Tuesday 10 March 2026 00:42:23 +0000 (0:00:00.373) 0:00:15.788 ********* 2026-03-10 00:42:31.535701 | orchestrator | skipping: 
[testbed-node-4] 2026-03-10 00:42:31.535715 | orchestrator | 2026-03-10 00:42:31.535727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535740 | orchestrator | Tuesday 10 March 2026 00:42:23 +0000 (0:00:00.194) 0:00:15.983 ********* 2026-03-10 00:42:31.535753 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.535764 | orchestrator | 2026-03-10 00:42:31.535776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535788 | orchestrator | Tuesday 10 March 2026 00:42:24 +0000 (0:00:00.266) 0:00:16.249 ********* 2026-03-10 00:42:31.535799 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.535811 | orchestrator | 2026-03-10 00:42:31.535824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535836 | orchestrator | Tuesday 10 March 2026 00:42:24 +0000 (0:00:00.190) 0:00:16.440 ********* 2026-03-10 00:42:31.535876 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.535890 | orchestrator | 2026-03-10 00:42:31.535903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535916 | orchestrator | Tuesday 10 March 2026 00:42:24 +0000 (0:00:00.202) 0:00:16.644 ********* 2026-03-10 00:42:31.535929 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.535941 | orchestrator | 2026-03-10 00:42:31.535953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.535967 | orchestrator | Tuesday 10 March 2026 00:42:25 +0000 (0:00:00.691) 0:00:17.335 ********* 2026-03-10 00:42:31.536001 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536015 | orchestrator | 2026-03-10 00:42:31.536027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536039 | 
orchestrator | Tuesday 10 March 2026 00:42:25 +0000 (0:00:00.223) 0:00:17.559 ********* 2026-03-10 00:42:31.536051 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536065 | orchestrator | 2026-03-10 00:42:31.536078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536091 | orchestrator | Tuesday 10 March 2026 00:42:25 +0000 (0:00:00.208) 0:00:17.767 ********* 2026-03-10 00:42:31.536103 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536116 | orchestrator | 2026-03-10 00:42:31.536146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536159 | orchestrator | Tuesday 10 March 2026 00:42:25 +0000 (0:00:00.212) 0:00:17.980 ********* 2026-03-10 00:42:31.536172 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8) 2026-03-10 00:42:31.536186 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8) 2026-03-10 00:42:31.536199 | orchestrator | 2026-03-10 00:42:31.536211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536223 | orchestrator | Tuesday 10 March 2026 00:42:26 +0000 (0:00:00.410) 0:00:18.391 ********* 2026-03-10 00:42:31.536235 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14) 2026-03-10 00:42:31.536247 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14) 2026-03-10 00:42:31.536260 | orchestrator | 2026-03-10 00:42:31.536272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536284 | orchestrator | Tuesday 10 March 2026 00:42:26 +0000 (0:00:00.420) 0:00:18.812 ********* 2026-03-10 00:42:31.536296 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0) 2026-03-10 00:42:31.536308 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0) 2026-03-10 00:42:31.536320 | orchestrator | 2026-03-10 00:42:31.536333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536365 | orchestrator | Tuesday 10 March 2026 00:42:27 +0000 (0:00:00.418) 0:00:19.231 ********* 2026-03-10 00:42:31.536377 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b) 2026-03-10 00:42:31.536390 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b) 2026-03-10 00:42:31.536402 | orchestrator | 2026-03-10 00:42:31.536414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:42:31.536427 | orchestrator | Tuesday 10 March 2026 00:42:27 +0000 (0:00:00.419) 0:00:19.651 ********* 2026-03-10 00:42:31.536459 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:42:31.536471 | orchestrator | 2026-03-10 00:42:31.536483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.536495 | orchestrator | Tuesday 10 March 2026 00:42:27 +0000 (0:00:00.349) 0:00:20.000 ********* 2026-03-10 00:42:31.536506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-10 00:42:31.536528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:42:31.536541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:42:31.536553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:42:31.536566 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:42:31.536578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:42:31.536590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:42:31.536602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:42:31.536614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-10 00:42:31.536626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:42:31.536638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:42:31.536651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:42:31.536663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:42:31.536675 | orchestrator | 2026-03-10 00:42:31.536687 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.536699 | orchestrator | Tuesday 10 March 2026 00:42:28 +0000 (0:00:00.388) 0:00:20.389 ********* 2026-03-10 00:42:31.536712 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536724 | orchestrator | 2026-03-10 00:42:31.536736 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.536748 | orchestrator | Tuesday 10 March 2026 00:42:28 +0000 (0:00:00.688) 0:00:21.077 ********* 2026-03-10 00:42:31.536760 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536772 | orchestrator | 2026-03-10 00:42:31.536785 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-10 00:42:31.536797 | orchestrator | Tuesday 10 March 2026 00:42:29 +0000 (0:00:00.207) 0:00:21.284 ********* 2026-03-10 00:42:31.536809 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536821 | orchestrator | 2026-03-10 00:42:31.536833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.536845 | orchestrator | Tuesday 10 March 2026 00:42:29 +0000 (0:00:00.195) 0:00:21.480 ********* 2026-03-10 00:42:31.536865 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536878 | orchestrator | 2026-03-10 00:42:31.536890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.536902 | orchestrator | Tuesday 10 March 2026 00:42:29 +0000 (0:00:00.196) 0:00:21.676 ********* 2026-03-10 00:42:31.536915 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536927 | orchestrator | 2026-03-10 00:42:31.536939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.536951 | orchestrator | Tuesday 10 March 2026 00:42:29 +0000 (0:00:00.197) 0:00:21.873 ********* 2026-03-10 00:42:31.536963 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.536975 | orchestrator | 2026-03-10 00:42:31.536987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.537000 | orchestrator | Tuesday 10 March 2026 00:42:29 +0000 (0:00:00.206) 0:00:22.080 ********* 2026-03-10 00:42:31.537012 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:31.537024 | orchestrator | 2026-03-10 00:42:31.537036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.537048 | orchestrator | Tuesday 10 March 2026 00:42:30 +0000 (0:00:00.242) 0:00:22.322 ********* 2026-03-10 00:42:31.537061 | orchestrator | skipping: [testbed-node-4] 
2026-03-10 00:42:31.537080 | orchestrator | 2026-03-10 00:42:31.537092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.537105 | orchestrator | Tuesday 10 March 2026 00:42:30 +0000 (0:00:00.223) 0:00:22.546 ********* 2026-03-10 00:42:31.537117 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-10 00:42:31.537130 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-10 00:42:31.537142 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-10 00:42:31.537155 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-10 00:42:31.537167 | orchestrator | 2026-03-10 00:42:31.537179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:31.537191 | orchestrator | Tuesday 10 March 2026 00:42:31 +0000 (0:00:00.878) 0:00:23.425 ********* 2026-03-10 00:42:31.537205 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386201 | orchestrator | 2026-03-10 00:42:37.386296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:37.386306 | orchestrator | Tuesday 10 March 2026 00:42:31 +0000 (0:00:00.186) 0:00:23.612 ********* 2026-03-10 00:42:37.386313 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386320 | orchestrator | 2026-03-10 00:42:37.386325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:37.386331 | orchestrator | Tuesday 10 March 2026 00:42:31 +0000 (0:00:00.195) 0:00:23.808 ********* 2026-03-10 00:42:37.386336 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386341 | orchestrator | 2026-03-10 00:42:37.386346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:42:37.386351 | orchestrator | Tuesday 10 March 2026 00:42:31 +0000 (0:00:00.207) 0:00:24.015 ********* 2026-03-10 00:42:37.386356 | 
orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386361 | orchestrator | 2026-03-10 00:42:37.386366 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-10 00:42:37.386371 | orchestrator | Tuesday 10 March 2026 00:42:32 +0000 (0:00:00.582) 0:00:24.598 ********* 2026-03-10 00:42:37.386377 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-10 00:42:37.386382 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-10 00:42:37.386387 | orchestrator | 2026-03-10 00:42:37.386392 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-10 00:42:37.386397 | orchestrator | Tuesday 10 March 2026 00:42:32 +0000 (0:00:00.137) 0:00:24.735 ********* 2026-03-10 00:42:37.386402 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386407 | orchestrator | 2026-03-10 00:42:37.386413 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-10 00:42:37.386418 | orchestrator | Tuesday 10 March 2026 00:42:32 +0000 (0:00:00.124) 0:00:24.860 ********* 2026-03-10 00:42:37.386423 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386428 | orchestrator | 2026-03-10 00:42:37.386496 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-10 00:42:37.386505 | orchestrator | Tuesday 10 March 2026 00:42:32 +0000 (0:00:00.139) 0:00:24.999 ********* 2026-03-10 00:42:37.386512 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386520 | orchestrator | 2026-03-10 00:42:37.386528 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-10 00:42:37.386536 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.134) 0:00:25.134 ********* 2026-03-10 00:42:37.386543 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:42:37.386552 | 
orchestrator | 2026-03-10 00:42:37.386560 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-10 00:42:37.386568 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.133) 0:00:25.268 ********* 2026-03-10 00:42:37.386578 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}}) 2026-03-10 00:42:37.386587 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e8bae358-0d63-5788-ab6b-8bf409d6bda1'}}) 2026-03-10 00:42:37.386621 | orchestrator | 2026-03-10 00:42:37.386628 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-10 00:42:37.386633 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.148) 0:00:25.416 ********* 2026-03-10 00:42:37.386639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}})  2026-03-10 00:42:37.386646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e8bae358-0d63-5788-ab6b-8bf409d6bda1'}})  2026-03-10 00:42:37.386652 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386657 | orchestrator | 2026-03-10 00:42:37.386662 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-10 00:42:37.386667 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.144) 0:00:25.561 ********* 2026-03-10 00:42:37.386672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}})  2026-03-10 00:42:37.386691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e8bae358-0d63-5788-ab6b-8bf409d6bda1'}})  2026-03-10 00:42:37.386697 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386702 | orchestrator | 2026-03-10 
00:42:37.386707 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-10 00:42:37.386712 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.133) 0:00:25.694 ********* 2026-03-10 00:42:37.386717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}})  2026-03-10 00:42:37.386723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e8bae358-0d63-5788-ab6b-8bf409d6bda1'}})  2026-03-10 00:42:37.386729 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386737 | orchestrator | 2026-03-10 00:42:37.386745 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-10 00:42:37.386754 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.129) 0:00:25.824 ********* 2026-03-10 00:42:37.386761 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:42:37.386770 | orchestrator | 2026-03-10 00:42:37.386778 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-10 00:42:37.386786 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.123) 0:00:25.948 ********* 2026-03-10 00:42:37.386794 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:42:37.386801 | orchestrator | 2026-03-10 00:42:37.386808 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-10 00:42:37.386816 | orchestrator | Tuesday 10 March 2026 00:42:33 +0000 (0:00:00.115) 0:00:26.063 ********* 2026-03-10 00:42:37.386842 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386851 | orchestrator | 2026-03-10 00:42:37.386859 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-10 00:42:37.386868 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.273) 0:00:26.336 ********* 2026-03-10 
00:42:37.386876 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386884 | orchestrator | 2026-03-10 00:42:37.386893 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-10 00:42:37.386901 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.120) 0:00:26.457 ********* 2026-03-10 00:42:37.386909 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.386917 | orchestrator | 2026-03-10 00:42:37.386926 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-10 00:42:37.386935 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.112) 0:00:26.569 ********* 2026-03-10 00:42:37.386943 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:42:37.386952 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:42:37.386962 | orchestrator |  "sdb": { 2026-03-10 00:42:37.386970 | orchestrator |  "osd_lvm_uuid": "ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d" 2026-03-10 00:42:37.386978 | orchestrator |  }, 2026-03-10 00:42:37.386998 | orchestrator |  "sdc": { 2026-03-10 00:42:37.387006 | orchestrator |  "osd_lvm_uuid": "e8bae358-0d63-5788-ab6b-8bf409d6bda1" 2026-03-10 00:42:37.387014 | orchestrator |  } 2026-03-10 00:42:37.387023 | orchestrator |  } 2026-03-10 00:42:37.387031 | orchestrator | } 2026-03-10 00:42:37.387039 | orchestrator | 2026-03-10 00:42:37.387048 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-10 00:42:37.387057 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.101) 0:00:26.671 ********* 2026-03-10 00:42:37.387065 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:42:37.387074 | orchestrator | 2026-03-10 00:42:37.387082 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-10 00:42:37.387091 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.139) 0:00:26.810 ********* 2026-03-10 
00:42:37.387101 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:42:37.387109 | orchestrator |
2026-03-10 00:42:37.387118 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-10 00:42:37.387125 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.107) 0:00:26.918 *********
2026-03-10 00:42:37.387134 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:42:37.387141 | orchestrator |
2026-03-10 00:42:37.387149 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-10 00:42:37.387158 | orchestrator | Tuesday 10 March 2026 00:42:34 +0000 (0:00:00.088) 0:00:27.006 *********
2026-03-10 00:42:37.387167 | orchestrator | changed: [testbed-node-4] => {
2026-03-10 00:42:37.387175 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-10 00:42:37.387184 | orchestrator |         "ceph_osd_devices": {
2026-03-10 00:42:37.387192 | orchestrator |             "sdb": {
2026-03-10 00:42:37.387201 | orchestrator |                 "osd_lvm_uuid": "ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d"
2026-03-10 00:42:37.387210 | orchestrator |             },
2026-03-10 00:42:37.387219 | orchestrator |             "sdc": {
2026-03-10 00:42:37.387227 | orchestrator |                 "osd_lvm_uuid": "e8bae358-0d63-5788-ab6b-8bf409d6bda1"
2026-03-10 00:42:37.387232 | orchestrator |             }
2026-03-10 00:42:37.387238 | orchestrator |         },
2026-03-10 00:42:37.387243 | orchestrator |         "lvm_volumes": [
2026-03-10 00:42:37.387248 | orchestrator |             {
2026-03-10 00:42:37.387253 | orchestrator |                 "data": "osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d",
2026-03-10 00:42:37.387259 | orchestrator |                 "data_vg": "ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d"
2026-03-10 00:42:37.387264 | orchestrator |             },
2026-03-10 00:42:37.387269 | orchestrator |             {
2026-03-10 00:42:37.387274 | orchestrator |                 "data": "osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1",
2026-03-10 00:42:37.387279 | orchestrator |                 "data_vg": "ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1"
2026-03-10 00:42:37.387284 | orchestrator |             }
2026-03-10 00:42:37.387289 | orchestrator |         ]
2026-03-10 00:42:37.387294 | orchestrator |     }
2026-03-10 00:42:37.387300 | orchestrator | }
2026-03-10 00:42:37.387305 | orchestrator |
2026-03-10 00:42:37.387310 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-10 00:42:37.387315 | orchestrator | Tuesday 10 March 2026 00:42:35 +0000 (0:00:00.148) 0:00:27.154 *********
2026-03-10 00:42:37.387320 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-10 00:42:37.387325 | orchestrator |
2026-03-10 00:42:37.387333 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-10 00:42:37.387342 | orchestrator |
2026-03-10 00:42:37.387350 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-10 00:42:37.387356 | orchestrator | Tuesday 10 March 2026 00:42:36 +0000 (0:00:01.030) 0:00:28.185 *********
2026-03-10 00:42:37.387361 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-10 00:42:37.387366 | orchestrator |
2026-03-10 00:42:37.387371 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-10 00:42:37.387383 | orchestrator | Tuesday 10 March 2026 00:42:36 +0000 (0:00:00.611) 0:00:28.797 *********
2026-03-10 00:42:37.387388 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:42:37.387393 | orchestrator |
2026-03-10 00:42:37.387398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:37.387403 | orchestrator | Tuesday 10 March 2026 00:42:36 +0000 (0:00:00.270) 0:00:29.067 *********
2026-03-10 00:42:37.387408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-10 00:42:37.387413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 =>
(item=loop1)
2026-03-10 00:42:37.387426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-10 00:42:37.387450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-10 00:42:37.387459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-10 00:42:37.387473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-10 00:42:46.296947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-10 00:42:46.297040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-10 00:42:46.297055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-10 00:42:46.297067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-10 00:42:46.297078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-10 00:42:46.297089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-10 00:42:46.297100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-10 00:42:46.297111 | orchestrator |
2026-03-10 00:42:46.297123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297134 | orchestrator | Tuesday 10 March 2026 00:42:37 +0000 (0:00:00.385) 0:00:29.452 *********
2026-03-10 00:42:46.297145 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297157 | orchestrator |
2026-03-10 00:42:46.297168 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297179 | orchestrator | Tuesday 10 March 2026 00:42:37 +0000 (0:00:00.267) 0:00:29.720 *********
2026-03-10 00:42:46.297190 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297201 | orchestrator |
2026-03-10 00:42:46.297212 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297223 | orchestrator | Tuesday 10 March 2026 00:42:37 +0000 (0:00:00.215) 0:00:29.936 *********
2026-03-10 00:42:46.297233 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297244 | orchestrator |
2026-03-10 00:42:46.297255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297266 | orchestrator | Tuesday 10 March 2026 00:42:38 +0000 (0:00:00.231) 0:00:30.167 *********
2026-03-10 00:42:46.297277 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297288 | orchestrator |
2026-03-10 00:42:46.297298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297310 | orchestrator | Tuesday 10 March 2026 00:42:38 +0000 (0:00:00.227) 0:00:30.395 *********
2026-03-10 00:42:46.297320 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297331 | orchestrator |
2026-03-10 00:42:46.297342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297353 | orchestrator | Tuesday 10 March 2026 00:42:38 +0000 (0:00:00.200) 0:00:30.595 *********
2026-03-10 00:42:46.297364 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297374 | orchestrator |
2026-03-10 00:42:46.297386 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297397 | orchestrator | Tuesday 10 March 2026 00:42:38 +0000 (0:00:00.245) 0:00:30.841 *********
2026-03-10 00:42:46.297471 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297485 | orchestrator |
2026-03-10 00:42:46.297496 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297507 | orchestrator | Tuesday 10 March 2026 00:42:38 +0000 (0:00:00.233) 0:00:31.074 *********
2026-03-10 00:42:46.297518 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.297529 | orchestrator |
2026-03-10 00:42:46.297540 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297551 | orchestrator | Tuesday 10 March 2026 00:42:39 +0000 (0:00:00.240) 0:00:31.314 *********
2026-03-10 00:42:46.297562 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb)
2026-03-10 00:42:46.297574 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb)
2026-03-10 00:42:46.297585 | orchestrator |
2026-03-10 00:42:46.297596 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297606 | orchestrator | Tuesday 10 March 2026 00:42:40 +0000 (0:00:00.952) 0:00:32.267 *********
2026-03-10 00:42:46.297617 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77)
2026-03-10 00:42:46.297628 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77)
2026-03-10 00:42:46.297639 | orchestrator |
2026-03-10 00:42:46.297650 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297661 | orchestrator | Tuesday 10 March 2026 00:42:40 +0000 (0:00:00.459) 0:00:32.726 *********
2026-03-10 00:42:46.297672 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730)
2026-03-10 00:42:46.297683 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730)
2026-03-10 00:42:46.297694 | orchestrator |
2026-03-10
00:42:46.297705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297716 | orchestrator | Tuesday 10 March 2026 00:42:41 +0000 (0:00:00.468) 0:00:33.195 *********
2026-03-10 00:42:46.297727 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88)
2026-03-10 00:42:46.297738 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88)
2026-03-10 00:42:46.297749 | orchestrator |
2026-03-10 00:42:46.297760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:42:46.297771 | orchestrator | Tuesday 10 March 2026 00:42:41 +0000 (0:00:00.465) 0:00:33.661 *********
2026-03-10 00:42:46.297781 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-10 00:42:46.297792 | orchestrator |
2026-03-10 00:42:46.297803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.297831 | orchestrator | Tuesday 10 March 2026 00:42:41 +0000 (0:00:00.400) 0:00:34.062 *********
2026-03-10 00:42:46.297842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-10 00:42:46.297853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-10 00:42:46.297864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-10 00:42:46.297875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-10 00:42:46.297886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-10 00:42:46.297896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-10 00:42:46.297907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-10 00:42:46.297919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-10 00:42:46.297937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-10 00:42:46.297948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-10 00:42:46.297959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-10 00:42:46.297986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-10 00:42:46.297997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-10 00:42:46.298008 | orchestrator |
2026-03-10 00:42:46.298064 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298076 | orchestrator | Tuesday 10 March 2026 00:42:42 +0000 (0:00:00.415) 0:00:34.477 *********
2026-03-10 00:42:46.298087 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298098 | orchestrator |
2026-03-10 00:42:46.298108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298119 | orchestrator | Tuesday 10 March 2026 00:42:42 +0000 (0:00:00.198) 0:00:34.675 *********
2026-03-10 00:42:46.298130 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298141 | orchestrator |
2026-03-10 00:42:46.298152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298162 | orchestrator | Tuesday 10 March 2026 00:42:42 +0000 (0:00:00.192) 0:00:34.868 *********
2026-03-10 00:42:46.298178 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298190 | orchestrator |
2026-03-10 00:42:46.298201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298212 | orchestrator | Tuesday 10 March 2026 00:42:42 +0000 (0:00:00.212) 0:00:35.080 *********
2026-03-10 00:42:46.298223 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298233 | orchestrator |
2026-03-10 00:42:46.298244 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298255 | orchestrator | Tuesday 10 March 2026 00:42:43 +0000 (0:00:00.331) 0:00:35.412 *********
2026-03-10 00:42:46.298266 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298277 | orchestrator |
2026-03-10 00:42:46.298287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298298 | orchestrator | Tuesday 10 March 2026 00:42:43 +0000 (0:00:00.226) 0:00:35.639 *********
2026-03-10 00:42:46.298309 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298320 | orchestrator |
2026-03-10 00:42:46.298331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298342 | orchestrator | Tuesday 10 March 2026 00:42:44 +0000 (0:00:00.742) 0:00:36.382 *********
2026-03-10 00:42:46.298353 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298363 | orchestrator |
2026-03-10 00:42:46.298374 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298385 | orchestrator | Tuesday 10 March 2026 00:42:44 +0000 (0:00:00.222) 0:00:36.605 *********
2026-03-10 00:42:46.298395 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298406 | orchestrator |
2026-03-10 00:42:46.298417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298464 | orchestrator | Tuesday 10 March 2026 00:42:44 +0000 (0:00:00.209) 0:00:36.814 *********
2026-03-10 00:42:46.298477 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-10 00:42:46.298489 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-10 00:42:46.298500 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-10 00:42:46.298511 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-10 00:42:46.298522 | orchestrator |
2026-03-10 00:42:46.298533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298550 | orchestrator | Tuesday 10 March 2026 00:42:45 +0000 (0:00:00.678) 0:00:37.493 *********
2026-03-10 00:42:46.298568 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298586 | orchestrator |
2026-03-10 00:42:46.298617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298629 | orchestrator | Tuesday 10 March 2026 00:42:45 +0000 (0:00:00.256) 0:00:37.749 *********
2026-03-10 00:42:46.298640 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298650 | orchestrator |
2026-03-10 00:42:46.298661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298672 | orchestrator | Tuesday 10 March 2026 00:42:45 +0000 (0:00:00.219) 0:00:37.968 *********
2026-03-10 00:42:46.298683 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298694 | orchestrator |
2026-03-10 00:42:46.298705 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:42:46.298716 | orchestrator | Tuesday 10 March 2026 00:42:46 +0000 (0:00:00.209) 0:00:38.177 *********
2026-03-10 00:42:46.298727 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:46.298743 | orchestrator |
2026-03-10 00:42:46.298774 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-10 00:42:50.441284 | orchestrator |
Tuesday 10 March 2026 00:42:46 +0000 (0:00:00.195) 0:00:38.373 *********
2026-03-10 00:42:50.441381 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-10 00:42:50.441393 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-10 00:42:50.441402 | orchestrator |
2026-03-10 00:42:50.441411 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-10 00:42:50.441419 | orchestrator | Tuesday 10 March 2026 00:42:46 +0000 (0:00:00.160) 0:00:38.534 *********
2026-03-10 00:42:50.441477 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441487 | orchestrator |
2026-03-10 00:42:50.441495 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-10 00:42:50.441502 | orchestrator | Tuesday 10 March 2026 00:42:46 +0000 (0:00:00.151) 0:00:38.686 *********
2026-03-10 00:42:50.441511 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441518 | orchestrator |
2026-03-10 00:42:50.441527 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-10 00:42:50.441535 | orchestrator | Tuesday 10 March 2026 00:42:46 +0000 (0:00:00.107) 0:00:38.793 *********
2026-03-10 00:42:50.441543 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441550 | orchestrator |
2026-03-10 00:42:50.441558 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-10 00:42:50.441566 | orchestrator | Tuesday 10 March 2026 00:42:46 +0000 (0:00:00.242) 0:00:39.035 *********
2026-03-10 00:42:50.441573 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:42:50.441582 | orchestrator |
2026-03-10 00:42:50.441590 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-10 00:42:50.441598 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.122) 0:00:39.158 *********
2026-03-10 00:42:50.441607 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0742eba-6300-5cfa-b498-a3704e14c384'}})
2026-03-10 00:42:50.441616 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}})
2026-03-10 00:42:50.441623 | orchestrator |
2026-03-10 00:42:50.441631 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-10 00:42:50.441639 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.163) 0:00:39.322 *********
2026-03-10 00:42:50.441646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0742eba-6300-5cfa-b498-a3704e14c384'}})
2026-03-10 00:42:50.441656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}})
2026-03-10 00:42:50.441663 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441671 | orchestrator |
2026-03-10 00:42:50.441678 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-10 00:42:50.441686 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.138) 0:00:39.460 *********
2026-03-10 00:42:50.441694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0742eba-6300-5cfa-b498-a3704e14c384'}})
2026-03-10 00:42:50.441725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}})
2026-03-10 00:42:50.441733 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441740 | orchestrator |
2026-03-10 00:42:50.441748 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-10 00:42:50.441755 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.125) 0:00:39.586 *********
2026-03-10 00:42:50.441763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0742eba-6300-5cfa-b498-a3704e14c384'}})
2026-03-10 00:42:50.441771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}})
2026-03-10 00:42:50.441778 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441786 | orchestrator |
2026-03-10 00:42:50.441795 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-10 00:42:50.441802 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.128) 0:00:39.714 *********
2026-03-10 00:42:50.441809 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:42:50.441817 | orchestrator |
2026-03-10 00:42:50.441825 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-10 00:42:50.441833 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.114) 0:00:39.829 *********
2026-03-10 00:42:50.441841 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:42:50.441848 | orchestrator |
2026-03-10 00:42:50.441872 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-10 00:42:50.441881 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.121) 0:00:39.950 *********
2026-03-10 00:42:50.441889 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441896 | orchestrator |
2026-03-10 00:42:50.441904 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-10 00:42:50.441912 | orchestrator | Tuesday 10 March 2026 00:42:47 +0000 (0:00:00.116) 0:00:40.066 *********
2026-03-10 00:42:50.441919 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441927 | orchestrator |
2026-03-10 00:42:50.441935 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-10 00:42:50.441942 | orchestrator | Tuesday 10 March 2026 00:42:48 +0000 (0:00:00.157) 0:00:40.224 *********
2026-03-10 00:42:50.441950 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.441959 | orchestrator |
2026-03-10 00:42:50.441967 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-10 00:42:50.441975 | orchestrator | Tuesday 10 March 2026 00:42:48 +0000 (0:00:00.168) 0:00:40.392 *********
2026-03-10 00:42:50.441983 | orchestrator | ok: [testbed-node-5] => {
2026-03-10 00:42:50.441991 | orchestrator |     "ceph_osd_devices": {
2026-03-10 00:42:50.441998 | orchestrator |         "sdb": {
2026-03-10 00:42:50.442067 | orchestrator |             "osd_lvm_uuid": "c0742eba-6300-5cfa-b498-a3704e14c384"
2026-03-10 00:42:50.442077 | orchestrator |         },
2026-03-10 00:42:50.442085 | orchestrator |         "sdc": {
2026-03-10 00:42:50.442093 | orchestrator |             "osd_lvm_uuid": "45abfd4e-fefd-5ba8-aea8-e55d74ffeda2"
2026-03-10 00:42:50.442102 | orchestrator |         }
2026-03-10 00:42:50.442110 | orchestrator |     }
2026-03-10 00:42:50.442118 | orchestrator | }
2026-03-10 00:42:50.442127 | orchestrator |
2026-03-10 00:42:50.442135 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-10 00:42:50.442143 | orchestrator | Tuesday 10 March 2026 00:42:48 +0000 (0:00:00.168) 0:00:40.561 *********
2026-03-10 00:42:50.442151 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.442159 | orchestrator |
2026-03-10 00:42:50.442167 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-10 00:42:50.442175 | orchestrator | Tuesday 10 March 2026 00:42:48 +0000 (0:00:00.358) 0:00:40.920 *********
2026-03-10 00:42:50.442183 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.442199 | orchestrator |
2026-03-10 00:42:50.442207 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-10 00:42:50.442215 | orchestrator | Tuesday 10 March 2026 00:42:48 +0000 (0:00:00.143) 0:00:41.063 *********
2026-03-10 00:42:50.442223 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:42:50.442231 | orchestrator |
2026-03-10 00:42:50.442239 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-10 00:42:50.442248 | orchestrator | Tuesday 10 March 2026 00:42:49 +0000 (0:00:00.160) 0:00:41.224 *********
2026-03-10 00:42:50.442256 | orchestrator | changed: [testbed-node-5] => {
2026-03-10 00:42:50.442264 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-10 00:42:50.442272 | orchestrator |         "ceph_osd_devices": {
2026-03-10 00:42:50.442281 | orchestrator |             "sdb": {
2026-03-10 00:42:50.442289 | orchestrator |                 "osd_lvm_uuid": "c0742eba-6300-5cfa-b498-a3704e14c384"
2026-03-10 00:42:50.442297 | orchestrator |             },
2026-03-10 00:42:50.442305 | orchestrator |             "sdc": {
2026-03-10 00:42:50.442313 | orchestrator |                 "osd_lvm_uuid": "45abfd4e-fefd-5ba8-aea8-e55d74ffeda2"
2026-03-10 00:42:50.442321 | orchestrator |             }
2026-03-10 00:42:50.442329 | orchestrator |         },
2026-03-10 00:42:50.442337 | orchestrator |         "lvm_volumes": [
2026-03-10 00:42:50.442346 | orchestrator |             {
2026-03-10 00:42:50.442354 | orchestrator |                 "data": "osd-block-c0742eba-6300-5cfa-b498-a3704e14c384",
2026-03-10 00:42:50.442362 | orchestrator |                 "data_vg": "ceph-c0742eba-6300-5cfa-b498-a3704e14c384"
2026-03-10 00:42:50.442370 | orchestrator |             },
2026-03-10 00:42:50.442378 | orchestrator |             {
2026-03-10 00:42:50.442387 | orchestrator |                 "data": "osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2",
2026-03-10 00:42:50.442399 | orchestrator |                 "data_vg": "ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2"
2026-03-10 00:42:50.442407 | orchestrator |             }
2026-03-10 00:42:50.442415 | orchestrator |         ]
2026-03-10 00:42:50.442443 | orchestrator |     }
2026-03-10 00:42:50.442452 | orchestrator | }
2026-03-10 00:42:50.442460 | orchestrator |
2026-03-10 00:42:50.442467 | orchestrator | RUNNING HANDLER
[Write configuration file] *************************************
2026-03-10 00:42:50.442474 | orchestrator | Tuesday 10 March 2026 00:42:49 +0000 (0:00:00.221) 0:00:41.445 *********
2026-03-10 00:42:50.442481 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-10 00:42:50.442488 | orchestrator |
2026-03-10 00:42:50.442497 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:42:50.442505 | orchestrator | testbed-node-3             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-03-10 00:42:50.442513 | orchestrator | testbed-node-4             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-03-10 00:42:50.442521 | orchestrator | testbed-node-5             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-03-10 00:42:50.442529 | orchestrator |
2026-03-10 00:42:50.442537 | orchestrator |
2026-03-10 00:42:50.442545 | orchestrator |
2026-03-10 00:42:50.442552 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:42:50.442559 | orchestrator | Tuesday 10 March 2026 00:42:50 +0000 (0:00:01.052) 0:00:42.498 *********
2026-03-10 00:42:50.442567 | orchestrator | ===============================================================================
2026-03-10 00:42:50.442574 | orchestrator | Write configuration file ------------------------------------------------ 3.94s
2026-03-10 00:42:50.442581 | orchestrator | Add known links to the list of available block devices ------------------ 1.29s
2026-03-10 00:42:50.442589 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s
2026-03-10 00:42:50.442596 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.13s
2026-03-10 00:42:50.442610 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2026-03-10 00:42:50.442618 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2026-03-10 00:42:50.442626 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2026-03-10 00:42:50.442634 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-03-10 00:42:50.442641 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s
2026-03-10 00:42:50.442649 | orchestrator | Print configuration data ------------------------------------------------ 0.79s
2026-03-10 00:42:50.442656 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-03-10 00:42:50.442664 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-03-10 00:42:50.442671 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-03-10 00:42:50.442684 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-03-10 00:42:50.853407 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-03-10 00:42:50.853547 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-03-10 00:42:50.853560 | orchestrator | Print WAL devices ------------------------------------------------------- 0.64s
2026-03-10 00:42:50.853565 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s
2026-03-10 00:42:50.853571 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2026-03-10 00:42:50.853577 | orchestrator | Set DB devices config data ---------------------------------------------- 0.53s
2026-03-10 00:43:13.771332 | orchestrator | 2026-03-10 00:43:13 | INFO  | Task 9868beac-ac52-4ec2-9f06-cff3280aca25 (sync inventory) is running in
background. Output coming soon.
2026-03-10 00:43:42.881179 | orchestrator | 2026-03-10 00:43:15 | INFO  | Starting group_vars file reorganization
2026-03-10 00:43:42.881273 | orchestrator | 2026-03-10 00:43:15 | INFO  | Moved 0 file(s) to their respective directories
2026-03-10 00:43:42.881284 | orchestrator | 2026-03-10 00:43:15 | INFO  | Group_vars file reorganization completed
2026-03-10 00:43:42.881291 | orchestrator | 2026-03-10 00:43:17 | INFO  | Starting variable preparation from inventory
2026-03-10 00:43:42.881299 | orchestrator | 2026-03-10 00:43:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-10 00:43:42.881306 | orchestrator | 2026-03-10 00:43:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-10 00:43:42.881312 | orchestrator | 2026-03-10 00:43:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-10 00:43:42.881319 | orchestrator | 2026-03-10 00:43:21 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-10 00:43:42.881326 | orchestrator | 2026-03-10 00:43:21 | INFO  | Variable preparation completed
2026-03-10 00:43:42.881333 | orchestrator | 2026-03-10 00:43:22 | INFO  | Starting inventory overwrite handling
2026-03-10 00:43:42.881340 | orchestrator | 2026-03-10 00:43:22 | INFO  | Handling group overwrites in 99-overwrite
2026-03-10 00:43:42.881346 | orchestrator | 2026-03-10 00:43:22 | INFO  | Removing group frr:children from 60-generic
2026-03-10 00:43:42.881353 | orchestrator | 2026-03-10 00:43:22 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-10 00:43:42.881377 | orchestrator | 2026-03-10 00:43:22 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-10 00:43:42.881385 | orchestrator | 2026-03-10 00:43:22 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-10 00:43:42.881391 | orchestrator | 2026-03-10 00:43:22 | INFO  | Handling group overwrites in 20-roles
2026-03-10 00:43:42.881398 | orchestrator | 2026-03-10 00:43:23 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-10 00:43:42.881470 | orchestrator | 2026-03-10 00:43:23 | INFO  | Removed 5 group(s) in total
2026-03-10 00:43:42.881477 | orchestrator | 2026-03-10 00:43:23 | INFO  | Inventory overwrite handling completed
2026-03-10 00:43:42.881483 | orchestrator | 2026-03-10 00:43:24 | INFO  | Starting merge of inventory files
2026-03-10 00:43:42.881489 | orchestrator | 2026-03-10 00:43:24 | INFO  | Inventory files merged successfully
2026-03-10 00:43:42.881496 | orchestrator | 2026-03-10 00:43:29 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-10 00:43:42.881502 | orchestrator | 2026-03-10 00:43:41 | INFO  | Successfully wrote ClusterShell configuration
2026-03-10 00:43:42.881508 | orchestrator | [master 0c493ba] 2026-03-10-00-43
2026-03-10 00:43:42.881516 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-10 00:43:45.317030 | orchestrator | 2026-03-10 00:43:45 | INFO  | Task 899d7601-b42d-4710-bcbf-0ad98da9f2d6 (ceph-create-lvm-devices) was prepared for execution.
2026-03-10 00:43:45.317122 | orchestrator | 2026-03-10 00:43:45 | INFO  | It takes a moment until task 899d7601-b42d-4710-bcbf-0ad98da9f2d6 (ceph-create-lvm-devices) has been started and output is visible here.
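The "Print configuration data" dumps above show a simple pattern: for each entry in `ceph_osd_devices`, the play derives one `lvm_volumes` item whose LV is named `osd-block-<osd_lvm_uuid>` and whose VG is named `ceph-<osd_lvm_uuid>` (the block-only case, since the DB/WAL variants were skipped). A minimal sketch of that mapping, assuming the naming convention visible in the log; `build_lvm_volumes` is a hypothetical helper, not part of the OSISM Ansible roles:

```python
# Hypothetical helper reproducing the ceph_osd_devices -> lvm_volumes mapping
# seen in the "Print configuration data" output (block-only layout).

def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """For each device, name the data LV/VG pair after its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# UUIDs taken from the testbed-node-5 log output above.
devices = {
    "sdb": {"osd_lvm_uuid": "c0742eba-6300-5cfa-b498-a3704e14c384"},
    "sdc": {"osd_lvm_uuid": "45abfd4e-fefd-5ba8-aea8-e55d74ffeda2"},
}
print(build_lvm_volumes(devices))
```

The resulting list matches the `lvm_volumes` block that the handler then writes to the configuration repository on testbed-manager.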
2026-03-10 00:43:56.922964 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-10 00:43:56.923096 | orchestrator | 2.16.14
2026-03-10 00:43:56.923116 | orchestrator |
2026-03-10 00:43:56.923128 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-10 00:43:56.923140 | orchestrator |
2026-03-10 00:43:56.923152 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-10 00:43:56.923164 | orchestrator | Tuesday 10 March 2026 00:43:49 +0000 (0:00:00.270) 0:00:00.270 *********
2026-03-10 00:43:56.923175 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-10 00:43:56.923187 | orchestrator |
2026-03-10 00:43:56.923198 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-10 00:43:56.923209 | orchestrator | Tuesday 10 March 2026 00:43:49 +0000 (0:00:00.217) 0:00:00.487 *********
2026-03-10 00:43:56.923220 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:43:56.923231 | orchestrator |
2026-03-10 00:43:56.923242 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923254 | orchestrator | Tuesday 10 March 2026 00:43:49 +0000 (0:00:00.208) 0:00:00.696 *********
2026-03-10 00:43:56.923265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-10 00:43:56.923276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-10 00:43:56.923287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-10 00:43:56.923298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-10 00:43:56.923309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-10 00:43:56.923320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-10 00:43:56.923331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-10 00:43:56.923342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-10 00:43:56.923353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-10 00:43:56.923364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-10 00:43:56.923375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-10 00:43:56.923386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-10 00:43:56.923446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-10 00:43:56.923485 | orchestrator |
2026-03-10 00:43:56.923498 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923511 | orchestrator | Tuesday 10 March 2026 00:43:50 +0000 (0:00:00.479) 0:00:01.175 *********
2026-03-10 00:43:56.923523 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923535 | orchestrator |
2026-03-10 00:43:56.923548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923560 | orchestrator | Tuesday 10 March 2026 00:43:50 +0000 (0:00:00.183) 0:00:01.358 *********
2026-03-10 00:43:56.923572 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923585 | orchestrator |
2026-03-10 00:43:56.923598 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923611 | orchestrator | Tuesday 10 March 2026 00:43:50 +0000 (0:00:00.182) 0:00:01.541 *********
2026-03-10 00:43:56.923623 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923635 | orchestrator |
2026-03-10 00:43:56.923647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923660 | orchestrator | Tuesday 10 March 2026 00:43:50 +0000 (0:00:00.161) 0:00:01.702 *********
2026-03-10 00:43:56.923672 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923684 | orchestrator |
2026-03-10 00:43:56.923697 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923709 | orchestrator | Tuesday 10 March 2026 00:43:51 +0000 (0:00:00.202) 0:00:01.905 *********
2026-03-10 00:43:56.923721 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923734 | orchestrator |
2026-03-10 00:43:56.923746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923759 | orchestrator | Tuesday 10 March 2026 00:43:51 +0000 (0:00:00.196) 0:00:02.102 *********
2026-03-10 00:43:56.923771 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923783 | orchestrator |
2026-03-10 00:43:56.923795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923807 | orchestrator | Tuesday 10 March 2026 00:43:51 +0000 (0:00:00.179) 0:00:02.281 *********
2026-03-10 00:43:56.923819 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923831 | orchestrator |
2026-03-10 00:43:56.923842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923853 | orchestrator | Tuesday 10 March 2026 00:43:51 +0000 (0:00:00.189) 0:00:02.470 *********
2026-03-10 00:43:56.923864 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.923875 | orchestrator |
2026-03-10 00:43:56.923886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923897 | orchestrator | Tuesday 10 March 2026 00:43:51 +0000 (0:00:00.211) 0:00:02.682 *********
2026-03-10 00:43:56.923908 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf)
2026-03-10 00:43:56.923920 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf)
2026-03-10 00:43:56.923931 | orchestrator |
2026-03-10 00:43:56.923942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.923973 | orchestrator | Tuesday 10 March 2026 00:43:52 +0000 (0:00:00.429) 0:00:03.112 *********
2026-03-10 00:43:56.923985 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a)
2026-03-10 00:43:56.923996 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a)
2026-03-10 00:43:56.924007 | orchestrator |
2026-03-10 00:43:56.924018 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.924029 | orchestrator | Tuesday 10 March 2026 00:43:53 +0000 (0:00:00.704) 0:00:03.816 *********
2026-03-10 00:43:56.924039 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d)
2026-03-10 00:43:56.924050 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d)
2026-03-10 00:43:56.924069 | orchestrator |
2026-03-10 00:43:56.924080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.924091 | orchestrator | Tuesday 10 March 2026 00:43:53 +0000 (0:00:00.743) 0:00:04.560 *********
2026-03-10 00:43:56.924102 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1)
2026-03-10 00:43:56.924113 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1)
2026-03-10 00:43:56.924124 | orchestrator |
2026-03-10 00:43:56.924135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:43:56.924145 | orchestrator | Tuesday 10 March 2026 00:43:54 +0000 (0:00:00.894) 0:00:05.454 *********
2026-03-10 00:43:56.924156 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-10 00:43:56.924167 | orchestrator |
2026-03-10 00:43:56.924178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924189 | orchestrator | Tuesday 10 March 2026 00:43:55 +0000 (0:00:00.359) 0:00:05.814 *********
2026-03-10 00:43:56.924199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-10 00:43:56.924210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-10 00:43:56.924221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-10 00:43:56.924232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-10 00:43:56.924243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-10 00:43:56.924253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-10 00:43:56.924264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-10 00:43:56.924275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-10 00:43:56.924286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-10 00:43:56.924297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-10 00:43:56.924308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-10 00:43:56.924341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-10 00:43:56.924353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-10 00:43:56.924364 | orchestrator |
2026-03-10 00:43:56.924375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924386 | orchestrator | Tuesday 10 March 2026 00:43:55 +0000 (0:00:00.464) 0:00:06.278 *********
2026-03-10 00:43:56.924430 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924442 | orchestrator |
2026-03-10 00:43:56.924453 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924464 | orchestrator | Tuesday 10 March 2026 00:43:55 +0000 (0:00:00.214) 0:00:06.492 *********
2026-03-10 00:43:56.924474 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924485 | orchestrator |
2026-03-10 00:43:56.924496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924507 | orchestrator | Tuesday 10 March 2026 00:43:55 +0000 (0:00:00.187) 0:00:06.680 *********
2026-03-10 00:43:56.924518 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924529 | orchestrator |
2026-03-10 00:43:56.924539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924550 | orchestrator | Tuesday 10 March 2026 00:43:56 +0000 (0:00:00.199) 0:00:06.880 *********
2026-03-10 00:43:56.924561 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924579 | orchestrator |
2026-03-10 00:43:56.924590 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924600 | orchestrator | Tuesday 10 March 2026 00:43:56 +0000 (0:00:00.201) 0:00:07.081 *********
2026-03-10 00:43:56.924611 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924622 | orchestrator |
2026-03-10 00:43:56.924633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924643 | orchestrator | Tuesday 10 March 2026 00:43:56 +0000 (0:00:00.206) 0:00:07.287 *********
2026-03-10 00:43:56.924654 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924665 | orchestrator |
2026-03-10 00:43:56.924676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:43:56.924686 | orchestrator | Tuesday 10 March 2026 00:43:56 +0000 (0:00:00.190) 0:00:07.478 *********
2026-03-10 00:43:56.924697 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:56.924708 | orchestrator |
2026-03-10 00:43:56.924725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:05.668423 | orchestrator | Tuesday 10 March 2026 00:43:56 +0000 (0:00:00.199) 0:00:07.677 *********
2026-03-10 00:44:05.668501 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668509 | orchestrator |
2026-03-10 00:44:05.668515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:05.668521 | orchestrator | Tuesday 10 March 2026 00:43:57 +0000 (0:00:00.219) 0:00:07.897 *********
2026-03-10 00:44:05.668526 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-10 00:44:05.668532 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-10 00:44:05.668538 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-10 00:44:05.668543 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-10 00:44:05.668548 | orchestrator |
2026-03-10 00:44:05.668553 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:05.668558 | orchestrator | Tuesday 10 March 2026 00:43:58 +0000 (0:00:01.116) 0:00:09.013 *********
2026-03-10 00:44:05.668563 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668568 | orchestrator |
2026-03-10 00:44:05.668573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:05.668578 | orchestrator | Tuesday 10 March 2026 00:43:58 +0000 (0:00:00.224) 0:00:09.237 *********
2026-03-10 00:44:05.668583 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668588 | orchestrator |
2026-03-10 00:44:05.668593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:05.668598 | orchestrator | Tuesday 10 March 2026 00:43:58 +0000 (0:00:00.226) 0:00:09.463 *********
2026-03-10 00:44:05.668603 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668608 | orchestrator |
2026-03-10 00:44:05.668613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:05.668618 | orchestrator | Tuesday 10 March 2026 00:43:58 +0000 (0:00:00.215) 0:00:09.679 *********
2026-03-10 00:44:05.668623 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668628 | orchestrator |
2026-03-10 00:44:05.668633 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-10 00:44:05.668638 | orchestrator | Tuesday 10 March 2026 00:43:59 +0000 (0:00:00.237) 0:00:09.917 *********
2026-03-10 00:44:05.668642 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668647 | orchestrator |
2026-03-10 00:44:05.668652 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-10 00:44:05.668657 | orchestrator | Tuesday 10 March 2026 00:43:59 +0000 (0:00:00.158) 0:00:10.076 *********
2026-03-10 00:44:05.668663 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '120d91ae-c06d-5ca9-b450-85f2d491e96a'}})
2026-03-10 00:44:05.668668 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07a8a029-b5c8-5530-8cc4-5b47064bbf55'}})
2026-03-10 00:44:05.668673 | orchestrator |
2026-03-10 00:44:05.668678 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-10 00:44:05.668700 | orchestrator | Tuesday 10 March 2026 00:43:59 +0000 (0:00:00.241) 0:00:10.317 *********
2026-03-10 00:44:05.668706 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668713 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668717 | orchestrator |
2026-03-10 00:44:05.668722 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-10 00:44:05.668738 | orchestrator | Tuesday 10 March 2026 00:44:01 +0000 (0:00:02.098) 0:00:12.415 *********
2026-03-10 00:44:05.668744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668755 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668760 | orchestrator |
2026-03-10 00:44:05.668765 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-10 00:44:05.668770 | orchestrator | Tuesday 10 March 2026 00:44:01 +0000 (0:00:00.166) 0:00:12.582 *********
2026-03-10 00:44:05.668775 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668780 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668784 | orchestrator |
2026-03-10 00:44:05.668789 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-10 00:44:05.668795 | orchestrator | Tuesday 10 March 2026 00:44:03 +0000 (0:00:01.585) 0:00:14.167 *********
2026-03-10 00:44:05.668800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668805 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668809 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668814 | orchestrator |
2026-03-10 00:44:05.668819 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-10 00:44:05.668824 | orchestrator | Tuesday 10 March 2026 00:44:03 +0000 (0:00:00.158) 0:00:14.329 *********
2026-03-10 00:44:05.668839 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668844 | orchestrator |
2026-03-10 00:44:05.668849 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-10 00:44:05.668854 | orchestrator | Tuesday 10 March 2026 00:44:03 +0000 (0:00:00.158) 0:00:14.488 *********
2026-03-10 00:44:05.668859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668868 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668873 | orchestrator |
2026-03-10 00:44:05.668878 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-10 00:44:05.668883 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.450) 0:00:14.939 *********
2026-03-10 00:44:05.668888 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668892 | orchestrator |
2026-03-10 00:44:05.668897 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-10 00:44:05.668902 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.159) 0:00:15.099 *********
2026-03-10 00:44:05.668911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668921 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668926 | orchestrator |
2026-03-10 00:44:05.668931 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-10 00:44:05.668936 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.180) 0:00:15.279 *********
2026-03-10 00:44:05.668941 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668945 | orchestrator |
2026-03-10 00:44:05.668950 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-10 00:44:05.668955 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.152) 0:00:15.432 *********
2026-03-10 00:44:05.668960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.668965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.668970 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.668974 | orchestrator |
2026-03-10 00:44:05.668979 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-10 00:44:05.668984 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.169) 0:00:15.601 *********
2026-03-10 00:44:05.668989 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:05.668994 | orchestrator |
2026-03-10 00:44:05.668999 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-10 00:44:05.669004 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.142) 0:00:15.744 *********
2026-03-10 00:44:05.669009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.669014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.669019 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.669023 | orchestrator |
2026-03-10 00:44:05.669028 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-10 00:44:05.669033 | orchestrator | Tuesday 10 March 2026 00:44:05 +0000 (0:00:00.174) 0:00:15.918 *********
2026-03-10 00:44:05.669038 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.669048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.669053 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.669058 | orchestrator |
2026-03-10 00:44:05.669063 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-10 00:44:05.669068 | orchestrator | Tuesday 10 March 2026 00:44:05 +0000 (0:00:00.191) 0:00:16.110 *********
2026-03-10 00:44:05.669073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:05.669078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:05.669083 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.669087 | orchestrator |
2026-03-10 00:44:05.669092 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-10 00:44:05.669097 | orchestrator | Tuesday 10 March 2026 00:44:05 +0000 (0:00:00.158) 0:00:16.269 *********
2026-03-10 00:44:05.669105 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:05.669110 | orchestrator |
2026-03-10 00:44:05.669115 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-10 00:44:05.669123 | orchestrator | Tuesday 10 March 2026 00:44:05 +0000 (0:00:00.159) 0:00:16.428 *********
2026-03-10 00:44:12.066536 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.066651 | orchestrator |
2026-03-10 00:44:12.066682 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-10 00:44:12.066705 | orchestrator | Tuesday 10 March 2026 00:44:05 +0000 (0:00:00.165) 0:00:16.594 *********
2026-03-10 00:44:12.066726 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.066745 | orchestrator |
2026-03-10 00:44:12.066766 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-10 00:44:12.066786 | orchestrator | Tuesday 10 March 2026 00:44:05 +0000 (0:00:00.146) 0:00:16.740 *********
2026-03-10 00:44:12.066807 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:44:12.066823 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-10 00:44:12.066834 | orchestrator | }
2026-03-10 00:44:12.066845 | orchestrator |
2026-03-10 00:44:12.066856 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-10 00:44:12.066866 | orchestrator | Tuesday 10 March 2026 00:44:06 +0000 (0:00:00.370) 0:00:17.110 *********
2026-03-10 00:44:12.066877 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:44:12.066888 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-10 00:44:12.066898 | orchestrator | }
2026-03-10 00:44:12.066909 | orchestrator |
2026-03-10 00:44:12.066920 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-10 00:44:12.066933 | orchestrator | Tuesday 10 March 2026 00:44:06 +0000 (0:00:00.162) 0:00:17.273 *********
2026-03-10 00:44:12.066945 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:44:12.066959 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-10 00:44:12.066971 | orchestrator | }
2026-03-10 00:44:12.066984 | orchestrator |
2026-03-10 00:44:12.066997 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-10 00:44:12.067009 | orchestrator | Tuesday 10 March 2026 00:44:06 +0000 (0:00:00.177) 0:00:17.450 *********
2026-03-10 00:44:12.067022 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:12.067034 | orchestrator |
2026-03-10 00:44:12.067047 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-10 00:44:12.067059 | orchestrator | Tuesday 10 March 2026 00:44:07 +0000 (0:00:00.693) 0:00:18.144 *********
2026-03-10 00:44:12.067072 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:12.067084 | orchestrator |
2026-03-10 00:44:12.067097 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-10 00:44:12.067110 | orchestrator | Tuesday 10 March 2026 00:44:07 +0000 (0:00:00.572) 0:00:18.717 *********
2026-03-10 00:44:12.067122 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:12.067134 | orchestrator |
2026-03-10 00:44:12.067146 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-10 00:44:12.067158 | orchestrator | Tuesday 10 March 2026 00:44:08 +0000 (0:00:00.546) 0:00:19.263 *********
2026-03-10 00:44:12.067170 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:12.067183 | orchestrator |
2026-03-10 00:44:12.067195 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-10 00:44:12.067208 | orchestrator | Tuesday 10 March 2026 00:44:08 +0000 (0:00:00.127) 0:00:19.391 *********
2026-03-10 00:44:12.067221 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067233 | orchestrator |
2026-03-10 00:44:12.067246 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-10 00:44:12.067258 | orchestrator | Tuesday 10 March 2026 00:44:08 +0000 (0:00:00.112) 0:00:19.503 *********
2026-03-10 00:44:12.067271 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067283 | orchestrator |
2026-03-10 00:44:12.067293 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-10 00:44:12.067326 | orchestrator | Tuesday 10 March 2026 00:44:08 +0000 (0:00:00.139) 0:00:19.643 *********
2026-03-10 00:44:12.067351 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:44:12.067363 | orchestrator |  "vgs_report": {
2026-03-10 00:44:12.067374 | orchestrator |  "vg": []
2026-03-10 00:44:12.067413 | orchestrator |  }
2026-03-10 00:44:12.067435 | orchestrator | }
2026-03-10 00:44:12.067455 | orchestrator |
2026-03-10 00:44:12.067475 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-10 00:44:12.067494 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.128) 0:00:19.771 *********
2026-03-10 00:44:12.067513 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067534 | orchestrator |
2026-03-10 00:44:12.067556 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-10 00:44:12.067577 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.119) 0:00:19.891 *********
2026-03-10 00:44:12.067597 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067617 | orchestrator |
2026-03-10 00:44:12.067637 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-10 00:44:12.067657 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.121) 0:00:20.012 *********
2026-03-10 00:44:12.067678 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067698 | orchestrator |
2026-03-10 00:44:12.067715 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-10 00:44:12.067726 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.318) 0:00:20.331 *********
2026-03-10 00:44:12.067737 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067747 | orchestrator |
2026-03-10 00:44:12.067758 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-10 00:44:12.067769 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.163) 0:00:20.495 *********
2026-03-10 00:44:12.067780 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067790 | orchestrator |
2026-03-10 00:44:12.067801 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-10 00:44:12.067811 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.127) 0:00:20.622 *********
2026-03-10 00:44:12.067822 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067833 | orchestrator |
2026-03-10 00:44:12.067843 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-10 00:44:12.067854 | orchestrator | Tuesday 10 March 2026 00:44:09 +0000 (0:00:00.124) 0:00:20.747 *********
2026-03-10 00:44:12.067864 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067875 | orchestrator |
2026-03-10 00:44:12.067886 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-10 00:44:12.067896 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.143) 0:00:20.890 *********
2026-03-10 00:44:12.067926 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067937 | orchestrator |
2026-03-10 00:44:12.067948 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-10 00:44:12.067958 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.129) 0:00:21.020 *********
2026-03-10 00:44:12.067969 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.067980 | orchestrator |
2026-03-10 00:44:12.067990 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-10 00:44:12.068001 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.123) 0:00:21.143 *********
2026-03-10 00:44:12.068011 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068022 | orchestrator |
2026-03-10 00:44:12.068033 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-10 00:44:12.068043 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.142) 0:00:21.285 *********
2026-03-10 00:44:12.068054 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068065 | orchestrator |
2026-03-10 00:44:12.068075 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-10 00:44:12.068086 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.140) 0:00:21.426 *********
2026-03-10 00:44:12.068107 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068118 | orchestrator |
2026-03-10 00:44:12.068129 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-10 00:44:12.068140 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.133) 0:00:21.559 *********
2026-03-10 00:44:12.068150 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068161 | orchestrator |
2026-03-10 00:44:12.068172 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-10 00:44:12.068183 | orchestrator | Tuesday 10 March 2026 00:44:10 +0000 (0:00:00.154) 0:00:21.714 *********
2026-03-10 00:44:12.068193 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068204 | orchestrator |
2026-03-10 00:44:12.068215 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-10 00:44:12.068226 | orchestrator | Tuesday 10 March 2026 00:44:11 +0000 (0:00:00.111) 0:00:21.826 *********
2026-03-10 00:44:12.068237 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:12.068250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:12.068260 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068271 | orchestrator |
2026-03-10 00:44:12.068282 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-10 00:44:12.068293 | orchestrator | Tuesday 10 March 2026 00:44:11 +0000 (0:00:00.294) 0:00:22.120 *********
2026-03-10 00:44:12.068304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:12.068315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:12.068325 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068336 | orchestrator |
2026-03-10 00:44:12.068347 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-10 00:44:12.068361 | orchestrator | Tuesday 10 March 2026 00:44:11 +0000 (0:00:00.137) 0:00:22.257 *********
2026-03-10 00:44:12.068379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})
2026-03-10 00:44:12.068418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})
2026-03-10 00:44:12.068431 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.068441 | orchestrator |
2026-03-10 00:44:12.068452 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-10 00:44:12.068463 | orchestrator | Tuesday 10 March 2026 00:44:11 +0000 (0:00:00.146) 0:00:22.404 *********
2026-03-10 00:44:12.068474 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:12.068485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:12.068495 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:12.068506 | orchestrator | 2026-03-10 00:44:12.068517 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-10 00:44:12.068527 | orchestrator | Tuesday 10 March 2026 00:44:11 +0000 (0:00:00.139) 0:00:22.543 ********* 2026-03-10 00:44:12.068538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:12.068549 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:12.068566 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:12.068577 | orchestrator | 2026-03-10 00:44:12.068588 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-10 00:44:12.068599 | orchestrator | Tuesday 10 March 2026 00:44:11 +0000 (0:00:00.143) 0:00:22.687 ********* 2026-03-10 00:44:12.068617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:17.463910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:17.464014 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:17.464030 | orchestrator | 2026-03-10 00:44:17.464043 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-10 00:44:17.464056 | orchestrator | Tuesday 10 March 2026 00:44:12 +0000 (0:00:00.141) 0:00:22.828 ********* 2026-03-10 00:44:17.464067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:17.464078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:17.464089 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:17.464099 | orchestrator | 2026-03-10 00:44:17.464130 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-10 00:44:17.464141 | orchestrator | Tuesday 10 March 2026 00:44:12 +0000 (0:00:00.181) 0:00:23.010 ********* 2026-03-10 00:44:17.464153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:17.464164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:17.464175 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:17.464186 | orchestrator | 2026-03-10 00:44:17.464197 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-10 00:44:17.464209 | orchestrator | Tuesday 10 March 2026 00:44:12 +0000 (0:00:00.142) 0:00:23.153 ********* 2026-03-10 00:44:17.464219 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:44:17.464231 | orchestrator | 2026-03-10 00:44:17.464242 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-10 00:44:17.464253 | orchestrator | Tuesday 10 March 2026 00:44:12 +0000 
(0:00:00.484) 0:00:23.638 ********* 2026-03-10 00:44:17.464264 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:44:17.464275 | orchestrator | 2026-03-10 00:44:17.464286 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-10 00:44:17.464296 | orchestrator | Tuesday 10 March 2026 00:44:13 +0000 (0:00:00.527) 0:00:24.165 ********* 2026-03-10 00:44:17.464307 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:44:17.464317 | orchestrator | 2026-03-10 00:44:17.464328 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-10 00:44:17.464339 | orchestrator | Tuesday 10 March 2026 00:44:13 +0000 (0:00:00.148) 0:00:24.314 ********* 2026-03-10 00:44:17.464350 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'vg_name': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'}) 2026-03-10 00:44:17.464367 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'vg_name': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'}) 2026-03-10 00:44:17.464378 | orchestrator | 2026-03-10 00:44:17.464453 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-10 00:44:17.464467 | orchestrator | Tuesday 10 March 2026 00:44:13 +0000 (0:00:00.196) 0:00:24.510 ********* 2026-03-10 00:44:17.464479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:17.464512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:17.464526 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:17.464538 | orchestrator | 2026-03-10 00:44:17.464550 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-10 00:44:17.464563 | orchestrator | Tuesday 10 March 2026 00:44:14 +0000 (0:00:00.384) 0:00:24.895 ********* 2026-03-10 00:44:17.464575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:17.464588 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:17.464600 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:17.464612 | orchestrator | 2026-03-10 00:44:17.464625 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-10 00:44:17.464637 | orchestrator | Tuesday 10 March 2026 00:44:14 +0000 (0:00:00.172) 0:00:25.067 ********* 2026-03-10 00:44:17.464650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'})  2026-03-10 00:44:17.464662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'})  2026-03-10 00:44:17.464674 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:17.464686 | orchestrator | 2026-03-10 00:44:17.464698 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-10 00:44:17.464710 | orchestrator | Tuesday 10 March 2026 00:44:14 +0000 (0:00:00.183) 0:00:25.251 ********* 2026-03-10 00:44:17.464741 | orchestrator | ok: [testbed-node-3] => { 2026-03-10 00:44:17.464753 | orchestrator |  "lvm_report": { 2026-03-10 00:44:17.464764 | orchestrator |  "lv": [ 2026-03-10 00:44:17.464775 | orchestrator |  { 2026-03-10 00:44:17.464786 | orchestrator |  "lv_name": 
"osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55", 2026-03-10 00:44:17.464798 | orchestrator |  "vg_name": "ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55" 2026-03-10 00:44:17.464809 | orchestrator |  }, 2026-03-10 00:44:17.464820 | orchestrator |  { 2026-03-10 00:44:17.464831 | orchestrator |  "lv_name": "osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a", 2026-03-10 00:44:17.464842 | orchestrator |  "vg_name": "ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a" 2026-03-10 00:44:17.464852 | orchestrator |  } 2026-03-10 00:44:17.464863 | orchestrator |  ], 2026-03-10 00:44:17.464874 | orchestrator |  "pv": [ 2026-03-10 00:44:17.464885 | orchestrator |  { 2026-03-10 00:44:17.464895 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-10 00:44:17.464906 | orchestrator |  "vg_name": "ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a" 2026-03-10 00:44:17.464917 | orchestrator |  }, 2026-03-10 00:44:17.464928 | orchestrator |  { 2026-03-10 00:44:17.464938 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-10 00:44:17.464949 | orchestrator |  "vg_name": "ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55" 2026-03-10 00:44:17.464960 | orchestrator |  } 2026-03-10 00:44:17.464971 | orchestrator |  ] 2026-03-10 00:44:17.464981 | orchestrator |  } 2026-03-10 00:44:17.464992 | orchestrator | } 2026-03-10 00:44:17.465004 | orchestrator | 2026-03-10 00:44:17.465015 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-10 00:44:17.465034 | orchestrator | 2026-03-10 00:44:17.465059 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:44:17.465085 | orchestrator | Tuesday 10 March 2026 00:44:14 +0000 (0:00:00.313) 0:00:25.564 ********* 2026-03-10 00:44:17.465118 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-10 00:44:17.465137 | orchestrator | 2026-03-10 00:44:17.465155 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 
00:44:17.465174 | orchestrator | Tuesday 10 March 2026 00:44:15 +0000 (0:00:00.252) 0:00:25.817 ********* 2026-03-10 00:44:17.465194 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:44:17.465213 | orchestrator | 2026-03-10 00:44:17.465232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:17.465252 | orchestrator | Tuesday 10 March 2026 00:44:15 +0000 (0:00:00.267) 0:00:26.085 ********* 2026-03-10 00:44:17.465273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-10 00:44:17.465292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:44:17.465312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:44:17.465327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:44:17.465338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:44:17.465348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:44:17.465359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:44:17.465458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:44:17.465484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-10 00:44:17.465502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:44:17.465521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:44:17.465539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:44:17.465557 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:44:17.465575 | orchestrator | 2026-03-10 00:44:17.465592 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:17.465608 | orchestrator | Tuesday 10 March 2026 00:44:15 +0000 (0:00:00.427) 0:00:26.513 ********* 2026-03-10 00:44:17.465626 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:17.465644 | orchestrator | 2026-03-10 00:44:17.465663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:17.465682 | orchestrator | Tuesday 10 March 2026 00:44:15 +0000 (0:00:00.206) 0:00:26.719 ********* 2026-03-10 00:44:17.465701 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:17.465718 | orchestrator | 2026-03-10 00:44:17.465738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:17.465756 | orchestrator | Tuesday 10 March 2026 00:44:16 +0000 (0:00:00.210) 0:00:26.929 ********* 2026-03-10 00:44:17.465774 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:17.465791 | orchestrator | 2026-03-10 00:44:17.465810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:17.465828 | orchestrator | Tuesday 10 March 2026 00:44:16 +0000 (0:00:00.695) 0:00:27.625 ********* 2026-03-10 00:44:17.465847 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:17.465865 | orchestrator | 2026-03-10 00:44:17.465884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:17.465897 | orchestrator | Tuesday 10 March 2026 00:44:17 +0000 (0:00:00.210) 0:00:27.836 ********* 2026-03-10 00:44:17.465908 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:17.465918 | orchestrator | 2026-03-10 00:44:17.465929 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-10 00:44:17.465940 | orchestrator | Tuesday 10 March 2026 00:44:17 +0000 (0:00:00.192) 0:00:28.028 ********* 2026-03-10 00:44:17.465961 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:17.465972 | orchestrator | 2026-03-10 00:44:17.465996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633426 | orchestrator | Tuesday 10 March 2026 00:44:17 +0000 (0:00:00.196) 0:00:28.224 ********* 2026-03-10 00:44:29.633530 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.633546 | orchestrator | 2026-03-10 00:44:29.633559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633570 | orchestrator | Tuesday 10 March 2026 00:44:17 +0000 (0:00:00.209) 0:00:28.434 ********* 2026-03-10 00:44:29.633581 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.633593 | orchestrator | 2026-03-10 00:44:29.633604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633615 | orchestrator | Tuesday 10 March 2026 00:44:17 +0000 (0:00:00.191) 0:00:28.626 ********* 2026-03-10 00:44:29.633626 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8) 2026-03-10 00:44:29.633637 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8) 2026-03-10 00:44:29.633648 | orchestrator | 2026-03-10 00:44:29.633659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633670 | orchestrator | Tuesday 10 March 2026 00:44:18 +0000 (0:00:00.491) 0:00:29.117 ********* 2026-03-10 00:44:29.633680 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14) 2026-03-10 00:44:29.633692 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14) 2026-03-10 00:44:29.633702 | orchestrator | 2026-03-10 00:44:29.633713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633725 | orchestrator | Tuesday 10 March 2026 00:44:18 +0000 (0:00:00.471) 0:00:29.589 ********* 2026-03-10 00:44:29.633744 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0) 2026-03-10 00:44:29.633763 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0) 2026-03-10 00:44:29.633781 | orchestrator | 2026-03-10 00:44:29.633799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633817 | orchestrator | Tuesday 10 March 2026 00:44:19 +0000 (0:00:00.417) 0:00:30.006 ********* 2026-03-10 00:44:29.633833 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b) 2026-03-10 00:44:29.633849 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b) 2026-03-10 00:44:29.633864 | orchestrator | 2026-03-10 00:44:29.633882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:29.633900 | orchestrator | Tuesday 10 March 2026 00:44:19 +0000 (0:00:00.682) 0:00:30.689 ********* 2026-03-10 00:44:29.633918 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:44:29.633936 | orchestrator | 2026-03-10 00:44:29.633954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.633973 | orchestrator | Tuesday 10 March 2026 00:44:20 +0000 (0:00:00.570) 0:00:31.260 ********* 2026-03-10 00:44:29.633991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-10 00:44:29.634012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:44:29.634122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:44:29.634136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:44:29.634148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:44:29.634161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:44:29.634201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:44:29.634214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:44:29.634226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-10 00:44:29.634238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:44:29.634251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:44:29.634263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:44:29.634276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:44:29.634288 | orchestrator | 2026-03-10 00:44:29.634301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634313 | orchestrator | Tuesday 10 March 2026 00:44:21 +0000 (0:00:00.938) 0:00:32.199 ********* 2026-03-10 00:44:29.634324 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634335 | orchestrator | 2026-03-10 
00:44:29.634346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634373 | orchestrator | Tuesday 10 March 2026 00:44:21 +0000 (0:00:00.209) 0:00:32.409 ********* 2026-03-10 00:44:29.634417 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634428 | orchestrator | 2026-03-10 00:44:29.634439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634450 | orchestrator | Tuesday 10 March 2026 00:44:21 +0000 (0:00:00.224) 0:00:32.633 ********* 2026-03-10 00:44:29.634460 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634471 | orchestrator | 2026-03-10 00:44:29.634504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634516 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.213) 0:00:32.846 ********* 2026-03-10 00:44:29.634527 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634537 | orchestrator | 2026-03-10 00:44:29.634548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634559 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.203) 0:00:33.050 ********* 2026-03-10 00:44:29.634569 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634580 | orchestrator | 2026-03-10 00:44:29.634591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634601 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.226) 0:00:33.276 ********* 2026-03-10 00:44:29.634612 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634623 | orchestrator | 2026-03-10 00:44:29.634633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634644 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.254) 
0:00:33.531 ********* 2026-03-10 00:44:29.634655 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634666 | orchestrator | 2026-03-10 00:44:29.634676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634687 | orchestrator | Tuesday 10 March 2026 00:44:23 +0000 (0:00:00.291) 0:00:33.823 ********* 2026-03-10 00:44:29.634698 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634708 | orchestrator | 2026-03-10 00:44:29.634719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634730 | orchestrator | Tuesday 10 March 2026 00:44:23 +0000 (0:00:00.224) 0:00:34.048 ********* 2026-03-10 00:44:29.634740 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-10 00:44:29.634751 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-10 00:44:29.634763 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-10 00:44:29.634774 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-10 00:44:29.634784 | orchestrator | 2026-03-10 00:44:29.634795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634816 | orchestrator | Tuesday 10 March 2026 00:44:24 +0000 (0:00:01.005) 0:00:35.053 ********* 2026-03-10 00:44:29.634827 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634838 | orchestrator | 2026-03-10 00:44:29.634849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634860 | orchestrator | Tuesday 10 March 2026 00:44:24 +0000 (0:00:00.308) 0:00:35.362 ********* 2026-03-10 00:44:29.634870 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634881 | orchestrator | 2026-03-10 00:44:29.634892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634902 | orchestrator | Tuesday 10 
March 2026 00:44:25 +0000 (0:00:00.748) 0:00:36.110 ********* 2026-03-10 00:44:29.634913 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634924 | orchestrator | 2026-03-10 00:44:29.634935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:29.634946 | orchestrator | Tuesday 10 March 2026 00:44:25 +0000 (0:00:00.221) 0:00:36.332 ********* 2026-03-10 00:44:29.634956 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.634967 | orchestrator | 2026-03-10 00:44:29.634978 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-10 00:44:29.634994 | orchestrator | Tuesday 10 March 2026 00:44:25 +0000 (0:00:00.259) 0:00:36.591 ********* 2026-03-10 00:44:29.635005 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:29.635016 | orchestrator | 2026-03-10 00:44:29.635026 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-10 00:44:29.635037 | orchestrator | Tuesday 10 March 2026 00:44:25 +0000 (0:00:00.172) 0:00:36.764 ********* 2026-03-10 00:44:29.635048 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}}) 2026-03-10 00:44:29.635059 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e8bae358-0d63-5788-ab6b-8bf409d6bda1'}}) 2026-03-10 00:44:29.635070 | orchestrator | 2026-03-10 00:44:29.635081 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-10 00:44:29.635091 | orchestrator | Tuesday 10 March 2026 00:44:26 +0000 (0:00:00.222) 0:00:36.986 ********* 2026-03-10 00:44:29.635103 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}) 2026-03-10 00:44:29.635116 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:29.635127 | orchestrator |
2026-03-10 00:44:29.635138 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-10 00:44:29.635148 | orchestrator | Tuesday 10 March 2026 00:44:28 +0000 (0:00:01.885) 0:00:38.871 *********
2026-03-10 00:44:29.635159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:29.635171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:29.635182 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:29.635193 | orchestrator |
2026-03-10 00:44:29.635203 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-10 00:44:29.635214 | orchestrator | Tuesday 10 March 2026 00:44:28 +0000 (0:00:00.165) 0:00:39.036 *********
2026-03-10 00:44:29.635225 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:29.635242 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.648016 | orchestrator |
2026-03-10 00:44:35.648124 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-10 00:44:35.648166 | orchestrator | Tuesday 10 March 2026 00:44:29 +0000 (0:00:01.353) 0:00:40.390 *********
2026-03-10 00:44:35.648179 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.648192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.648204 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648216 | orchestrator |
2026-03-10 00:44:35.648227 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-10 00:44:35.648238 | orchestrator | Tuesday 10 March 2026 00:44:29 +0000 (0:00:00.156) 0:00:40.547 *********
2026-03-10 00:44:35.648249 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648260 | orchestrator |
2026-03-10 00:44:35.648276 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-10 00:44:35.648294 | orchestrator | Tuesday 10 March 2026 00:44:29 +0000 (0:00:00.152) 0:00:40.699 *********
2026-03-10 00:44:35.648317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.648341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.648360 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648462 | orchestrator |
2026-03-10 00:44:35.648482 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-10 00:44:35.648499 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.148) 0:00:40.847 *********
2026-03-10 00:44:35.648515 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648531 | orchestrator |
2026-03-10 00:44:35.648546 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-10 00:44:35.648561 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.142) 0:00:40.990 *********
2026-03-10 00:44:35.648577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.648595 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.648612 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648630 | orchestrator |
2026-03-10 00:44:35.648648 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-10 00:44:35.648682 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.463) 0:00:41.453 *********
2026-03-10 00:44:35.648699 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648715 | orchestrator |
2026-03-10 00:44:35.648733 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-10 00:44:35.648751 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.157) 0:00:41.611 *********
2026-03-10 00:44:35.648769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.648787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.648804 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.648823 | orchestrator |
2026-03-10 00:44:35.648842 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-10 00:44:35.648860 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.150) 0:00:41.761 *********
2026-03-10 00:44:35.648879 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:35.648900 | orchestrator |
2026-03-10 00:44:35.648918 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-10 00:44:35.648953 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.154) 0:00:41.915 *********
2026-03-10 00:44:35.648973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.648992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.649010 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649028 | orchestrator |
2026-03-10 00:44:35.649047 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-10 00:44:35.649067 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.191) 0:00:42.107 *********
2026-03-10 00:44:35.649085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.649097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.649108 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649119 | orchestrator |
2026-03-10 00:44:35.649130 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-10 00:44:35.649161 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.168) 0:00:42.275 *********
2026-03-10 00:44:35.649173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:35.649183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:35.649194 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649205 | orchestrator |
2026-03-10 00:44:35.649216 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-10 00:44:35.649227 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.156) 0:00:42.432 *********
2026-03-10 00:44:35.649237 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649248 | orchestrator |
2026-03-10 00:44:35.649259 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-10 00:44:35.649269 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.143) 0:00:42.576 *********
2026-03-10 00:44:35.649280 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649291 | orchestrator |
2026-03-10 00:44:35.649301 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-10 00:44:35.649312 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.150) 0:00:42.726 *********
2026-03-10 00:44:35.649323 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649333 | orchestrator |
2026-03-10 00:44:35.649344 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-10 00:44:35.649355 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.150) 0:00:42.877 *********
2026-03-10 00:44:35.649365 | orchestrator | ok: [testbed-node-4] => {
2026-03-10 00:44:35.649411 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-10 00:44:35.649424 | orchestrator | }
2026-03-10 00:44:35.649435 | orchestrator |
2026-03-10 00:44:35.649447 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-10 00:44:35.649458 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.171) 0:00:43.048 *********
2026-03-10 00:44:35.649468 | orchestrator | ok: [testbed-node-4] => {
2026-03-10 00:44:35.649479 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-10 00:44:35.649490 | orchestrator | }
2026-03-10 00:44:35.649500 | orchestrator |
2026-03-10 00:44:35.649511 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-10 00:44:35.649522 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.200) 0:00:43.248 *********
2026-03-10 00:44:35.649541 | orchestrator | ok: [testbed-node-4] => {
2026-03-10 00:44:35.649553 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-10 00:44:35.649564 | orchestrator | }
2026-03-10 00:44:35.649574 | orchestrator |
2026-03-10 00:44:35.649585 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-10 00:44:35.649596 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.364) 0:00:43.613 *********
2026-03-10 00:44:35.649607 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:35.649617 | orchestrator |
2026-03-10 00:44:35.649628 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-10 00:44:35.649639 | orchestrator | Tuesday 10 March 2026 00:44:33 +0000 (0:00:00.527) 0:00:44.141 *********
2026-03-10 00:44:35.649650 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:35.649661 | orchestrator |
2026-03-10 00:44:35.649672 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-10 00:44:35.649683 | orchestrator | Tuesday 10 March 2026 00:44:33 +0000 (0:00:00.530) 0:00:44.671 *********
2026-03-10 00:44:35.649694 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:35.649705 | orchestrator |
2026-03-10 00:44:35.649716 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-10 00:44:35.649726 | orchestrator | Tuesday 10 March 2026 00:44:34 +0000 (0:00:00.157) 0:00:45.253 *********
2026-03-10 00:44:35.649737 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:35.649748 | orchestrator |
2026-03-10 00:44:35.649759 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-10 00:44:35.649769 | orchestrator | Tuesday 10 March 2026 00:44:34 +0000 (0:00:00.157) 0:00:45.411 *********
2026-03-10 00:44:35.649780 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649791 | orchestrator |
2026-03-10 00:44:35.649801 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-10 00:44:35.649825 | orchestrator | Tuesday 10 March 2026 00:44:34 +0000 (0:00:00.125) 0:00:45.537 *********
2026-03-10 00:44:35.649836 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649858 | orchestrator |
2026-03-10 00:44:35.649869 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-10 00:44:35.649880 | orchestrator | Tuesday 10 March 2026 00:44:34 +0000 (0:00:00.127) 0:00:45.664 *********
2026-03-10 00:44:35.649891 | orchestrator | ok: [testbed-node-4] => {
2026-03-10 00:44:35.649902 | orchestrator |  "vgs_report": {
2026-03-10 00:44:35.649913 | orchestrator |  "vg": []
2026-03-10 00:44:35.649925 | orchestrator |  }
2026-03-10 00:44:35.649936 | orchestrator | }
2026-03-10 00:44:35.649947 | orchestrator |
2026-03-10 00:44:35.649958 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-10 00:44:35.649969 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:00.158) 0:00:45.823 *********
2026-03-10 00:44:35.649979 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.649990 | orchestrator |
2026-03-10 00:44:35.650001 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-10 00:44:35.650012 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:00.136) 0:00:45.960 *********
2026-03-10 00:44:35.650097 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.650108 | orchestrator |
2026-03-10 00:44:35.650119 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-10 00:44:35.650130 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:00.145) 0:00:46.105 *********
2026-03-10 00:44:35.650141 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.650152 | orchestrator |
2026-03-10 00:44:35.650163 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-10 00:44:35.650186 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:00.146) 0:00:46.252 *********
2026-03-10 00:44:35.650198 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:35.650209 | orchestrator |
2026-03-10 00:44:35.650230 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-10 00:44:40.696252 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:00.151) 0:00:46.404 *********
2026-03-10 00:44:40.696360 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696396 | orchestrator |
2026-03-10 00:44:40.696407 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-10 00:44:40.696415 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.381) 0:00:46.785 *********
2026-03-10 00:44:40.696424 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696436 | orchestrator |
2026-03-10 00:44:40.696454 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-10 00:44:40.696470 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.145) 0:00:46.931 *********
2026-03-10 00:44:40.696482 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696494 | orchestrator |
2026-03-10 00:44:40.696506 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-10 00:44:40.696519 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.141) 0:00:47.073 *********
2026-03-10 00:44:40.696531 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696543 | orchestrator |
2026-03-10 00:44:40.696554 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-10 00:44:40.696567 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.153) 0:00:47.226 *********
2026-03-10 00:44:40.696579 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696592 | orchestrator |
2026-03-10 00:44:40.696605 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-10 00:44:40.696618 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.156) 0:00:47.382 *********
2026-03-10 00:44:40.696631 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696643 | orchestrator |
2026-03-10 00:44:40.696656 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-10 00:44:40.696669 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.146) 0:00:47.529 *********
2026-03-10 00:44:40.696681 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696695 | orchestrator |
2026-03-10 00:44:40.696707 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-10 00:44:40.696720 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.157) 0:00:47.686 *********
2026-03-10 00:44:40.696734 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696747 | orchestrator |
2026-03-10 00:44:40.696760 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-10 00:44:40.696768 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.145) 0:00:47.832 *********
2026-03-10 00:44:40.696775 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696784 | orchestrator |
2026-03-10 00:44:40.696792 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-10 00:44:40.696800 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.131) 0:00:47.964 *********
2026-03-10 00:44:40.696808 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696817 | orchestrator |
2026-03-10 00:44:40.696825 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-10 00:44:40.696846 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.133) 0:00:48.098 *********
2026-03-10 00:44:40.696857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.696867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.696875 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696883 | orchestrator |
2026-03-10 00:44:40.696891 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-10 00:44:40.696900 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.170) 0:00:48.268 *********
2026-03-10 00:44:40.696913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.696939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.696951 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.696964 | orchestrator |
2026-03-10 00:44:40.696975 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-10 00:44:40.696988 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.172) 0:00:48.440 *********
2026-03-10 00:44:40.697001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697025 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697037 | orchestrator |
2026-03-10 00:44:40.697049 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-10 00:44:40.697061 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.383) 0:00:48.824 *********
2026-03-10 00:44:40.697073 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697099 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697111 | orchestrator |
2026-03-10 00:44:40.697143 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-10 00:44:40.697156 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.157) 0:00:48.981 *********
2026-03-10 00:44:40.697169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697193 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697206 | orchestrator |
2026-03-10 00:44:40.697214 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-10 00:44:40.697221 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.161) 0:00:49.143 *********
2026-03-10 00:44:40.697228 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697245 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697257 | orchestrator |
2026-03-10 00:44:40.697268 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-10 00:44:40.697281 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.176) 0:00:49.319 *********
2026-03-10 00:44:40.697293 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697318 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697330 | orchestrator |
2026-03-10 00:44:40.697342 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-10 00:44:40.697354 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.178) 0:00:49.498 *********
2026-03-10 00:44:40.697367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697472 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697494 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697508 | orchestrator |
2026-03-10 00:44:40.697521 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-10 00:44:40.697534 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.176) 0:00:49.674 *********
2026-03-10 00:44:40.697547 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:40.697561 | orchestrator |
2026-03-10 00:44:40.697575 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-10 00:44:40.697587 | orchestrator | Tuesday 10 March 2026 00:44:39 +0000 (0:00:00.536) 0:00:50.211 *********
2026-03-10 00:44:40.697600 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:40.697613 | orchestrator |
2026-03-10 00:44:40.697626 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-10 00:44:40.697639 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.561) 0:00:50.773 *********
2026-03-10 00:44:40.697652 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:40.697665 | orchestrator |
2026-03-10 00:44:40.697678 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-10 00:44:40.697690 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.169) 0:00:50.943 *********
2026-03-10 00:44:40.697698 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'vg_name': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697707 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'vg_name': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697714 | orchestrator |
2026-03-10 00:44:40.697721 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-10 00:44:40.697728 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.177) 0:00:51.120 *********
2026-03-10 00:44:40.697735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:40.697749 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:40.697756 | orchestrator |
2026-03-10 00:44:40.697764 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-10 00:44:40.697771 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.158) 0:00:51.278 *********
2026-03-10 00:44:40.697778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:40.697794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:47.311831 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:47.311950 | orchestrator |
2026-03-10 00:44:47.311970 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-10 00:44:47.311988 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.179) 0:00:51.457 *********
2026-03-10 00:44:47.312003 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'})
2026-03-10 00:44:47.312019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'})
2026-03-10 00:44:47.312027 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:47.312035 | orchestrator |
2026-03-10 00:44:47.312043 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-10 00:44:47.312073 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.171) 0:00:51.629 *********
2026-03-10 00:44:47.312082 | orchestrator | ok: [testbed-node-4] => {
2026-03-10 00:44:47.312090 | orchestrator |  "lvm_report": {
2026-03-10 00:44:47.312099 | orchestrator |  "lv": [
2026-03-10 00:44:47.312107 | orchestrator |  {
2026-03-10 00:44:47.312115 | orchestrator |  "lv_name": "osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d",
2026-03-10 00:44:47.312124 | orchestrator |  "vg_name": "ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d"
2026-03-10 00:44:47.312132 | orchestrator |  },
2026-03-10 00:44:47.312140 | orchestrator |  {
2026-03-10 00:44:47.312147 | orchestrator |  "lv_name": "osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1",
2026-03-10 00:44:47.312155 | orchestrator |  "vg_name": "ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1"
2026-03-10 00:44:47.312163 | orchestrator |  }
2026-03-10 00:44:47.312171 | orchestrator |  ],
2026-03-10 00:44:47.312178 | orchestrator |  "pv": [
2026-03-10 00:44:47.312186 | orchestrator |  {
2026-03-10 00:44:47.312194 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-10 00:44:47.312202 | orchestrator |  "vg_name": "ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d"
2026-03-10 00:44:47.312209 | orchestrator |  },
2026-03-10 00:44:47.312217 | orchestrator |  {
2026-03-10 00:44:47.312225 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-10 00:44:47.312232 | orchestrator |  "vg_name": "ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1"
2026-03-10 00:44:47.312240 | orchestrator |  }
2026-03-10 00:44:47.312248 | orchestrator |  ]
2026-03-10 00:44:47.312255 | orchestrator |  }
2026-03-10 00:44:47.312263 | orchestrator | }
2026-03-10 00:44:47.312271 | orchestrator |
2026-03-10 00:44:47.312279 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-10 00:44:47.312287 | orchestrator |
2026-03-10 00:44:47.312295 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-10 00:44:47.312302 | orchestrator | Tuesday 10 March 2026 00:44:41 +0000 (0:00:00.541) 0:00:52.171 *********
2026-03-10 00:44:47.312311 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-10 00:44:47.312318 | orchestrator |
2026-03-10 00:44:47.312326 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-10 00:44:47.312335 | orchestrator | Tuesday 10 March 2026 00:44:41 +0000 (0:00:00.281) 0:00:52.452 *********
2026-03-10 00:44:47.312342 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:44:47.312350 | orchestrator |
2026-03-10 00:44:47.312358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312366 | orchestrator | Tuesday 10 March 2026 00:44:41 +0000 (0:00:00.280) 0:00:52.733 *********
2026-03-10 00:44:47.312399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-10 00:44:47.312408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-10 00:44:47.312418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-10 00:44:47.312427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-10 00:44:47.312435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-10 00:44:47.312444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-10 00:44:47.312453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-10 00:44:47.312462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-10 00:44:47.312470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-10 00:44:47.312479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-10 00:44:47.312494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-10 00:44:47.312503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-10 00:44:47.312512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-10 00:44:47.312521 | orchestrator |
2026-03-10 00:44:47.312535 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312552 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.460) 0:00:53.193 *********
2026-03-10 00:44:47.312565 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312578 | orchestrator |
2026-03-10 00:44:47.312592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312607 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.225) 0:00:53.419 *********
2026-03-10 00:44:47.312620 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312634 | orchestrator |
2026-03-10 00:44:47.312644 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312669 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.214) 0:00:53.634 *********
2026-03-10 00:44:47.312677 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312685 | orchestrator |
2026-03-10 00:44:47.312693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312700 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.218) 0:00:53.852 *********
2026-03-10 00:44:47.312708 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312716 | orchestrator |
2026-03-10 00:44:47.312724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312731 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.185) 0:00:54.038 *********
2026-03-10 00:44:47.312739 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312746 | orchestrator |
2026-03-10 00:44:47.312754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312762 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.719) 0:00:54.757 *********
2026-03-10 00:44:47.312770 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312777 | orchestrator |
2026-03-10 00:44:47.312785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312793 | orchestrator | Tuesday 10 March 2026 00:44:44 +0000 (0:00:00.223) 0:00:54.981 *********
2026-03-10 00:44:47.312801 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312808 | orchestrator |
2026-03-10 00:44:47.312816 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312824 | orchestrator | Tuesday 10 March 2026 00:44:44 +0000 (0:00:00.276) 0:00:55.257 *********
2026-03-10 00:44:47.312831 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:47.312839 | orchestrator |
2026-03-10 00:44:47.312847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312854 | orchestrator | Tuesday 10 March 2026 00:44:44 +0000 (0:00:00.276) 0:00:55.534 *********
2026-03-10 00:44:47.312862 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb)
2026-03-10 00:44:47.312872 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb)
2026-03-10 00:44:47.312880 | orchestrator |
2026-03-10 00:44:47.312888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312895 | orchestrator | Tuesday 10 March 2026 00:44:45 +0000 (0:00:00.435) 0:00:55.969 *********
2026-03-10 00:44:47.312945 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77)
2026-03-10 00:44:47.312954 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77)
2026-03-10 00:44:47.312962 | orchestrator |
2026-03-10 00:44:47.312970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.312988 | orchestrator | Tuesday 10 March 2026 00:44:45 +0000 (0:00:00.419) 0:00:56.389 *********
2026-03-10 00:44:47.312995 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730)
2026-03-10 00:44:47.313003 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730)
2026-03-10 00:44:47.313011 | orchestrator |
2026-03-10 00:44:47.313019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.313027 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.454) 0:00:56.843 *********
2026-03-10 00:44:47.313035 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88)
2026-03-10 00:44:47.313043 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88)
2026-03-10 00:44:47.313051 | orchestrator |
2026-03-10 00:44:47.313059 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:47.313066 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.441) 0:00:57.284 *********
2026-03-10 00:44:47.313074 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-10 00:44:47.313082 | orchestrator |
2026-03-10 00:44:47.313090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:47.313098 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.337) 0:00:57.622 *********
2026-03-10 00:44:47.313105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-10 00:44:47.313113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-10 00:44:47.313121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-10 00:44:47.313129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-10 00:44:47.313137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-10 00:44:47.313144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-10 00:44:47.313152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-10 00:44:47.313160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-10 00:44:47.313168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-10 00:44:47.313176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-10 00:44:47.313184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-10 00:44:47.313198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-10 00:44:56.412999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-10 00:44:56.413117 | orchestrator |
2026-03-10 00:44:56.413133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:56.413144 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.441) 0:00:58.064 *********
2026-03-10 00:44:56.413156 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:56.413168 | orchestrator |
2026-03-10 00:44:56.413179 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:56.413190 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.219) 0:00:58.283 *********
2026-03-10 00:44:56.413201 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:56.413211 | orchestrator |
2026-03-10 00:44:56.413222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:56.413232 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.703) 0:00:58.987 *********
2026-03-10 00:44:56.413243 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:56.413279 | orchestrator |
2026-03-10 00:44:56.413290 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:56.413301 |
orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.207) 0:00:59.195 ********* 2026-03-10 00:44:56.413312 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413322 | orchestrator | 2026-03-10 00:44:56.413333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413343 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.224) 0:00:59.420 ********* 2026-03-10 00:44:56.413354 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413416 | orchestrator | 2026-03-10 00:44:56.413428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413439 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.207) 0:00:59.627 ********* 2026-03-10 00:44:56.413450 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413460 | orchestrator | 2026-03-10 00:44:56.413471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413482 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.191) 0:00:59.819 ********* 2026-03-10 00:44:56.413492 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413503 | orchestrator | 2026-03-10 00:44:56.413516 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413529 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.201) 0:01:00.021 ********* 2026-03-10 00:44:56.413541 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413553 | orchestrator | 2026-03-10 00:44:56.413565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413577 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.256) 0:01:00.278 ********* 2026-03-10 00:44:56.413588 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-10 00:44:56.413614 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-10 00:44:56.413626 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-10 00:44:56.413637 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-10 00:44:56.413648 | orchestrator | 2026-03-10 00:44:56.413658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413669 | orchestrator | Tuesday 10 March 2026 00:44:50 +0000 (0:00:00.684) 0:01:00.962 ********* 2026-03-10 00:44:56.413680 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413690 | orchestrator | 2026-03-10 00:44:56.413701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413712 | orchestrator | Tuesday 10 March 2026 00:44:50 +0000 (0:00:00.215) 0:01:01.178 ********* 2026-03-10 00:44:56.413723 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413734 | orchestrator | 2026-03-10 00:44:56.413744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413755 | orchestrator | Tuesday 10 March 2026 00:44:50 +0000 (0:00:00.206) 0:01:01.384 ********* 2026-03-10 00:44:56.413766 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413776 | orchestrator | 2026-03-10 00:44:56.413787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:56.413797 | orchestrator | Tuesday 10 March 2026 00:44:50 +0000 (0:00:00.191) 0:01:01.576 ********* 2026-03-10 00:44:56.413808 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.413819 | orchestrator | 2026-03-10 00:44:56.413829 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-10 00:44:56.413840 | orchestrator | Tuesday 10 March 2026 00:44:51 +0000 (0:00:00.204) 0:01:01.780 ********* 2026-03-10 00:44:56.413850 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
00:44:56.413861 | orchestrator | 2026-03-10 00:44:56.413871 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-10 00:44:56.413882 | orchestrator | Tuesday 10 March 2026 00:44:51 +0000 (0:00:00.362) 0:01:02.143 ********* 2026-03-10 00:44:56.413892 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c0742eba-6300-5cfa-b498-a3704e14c384'}}) 2026-03-10 00:44:56.413912 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}}) 2026-03-10 00:44:56.413922 | orchestrator | 2026-03-10 00:44:56.413933 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-10 00:44:56.413944 | orchestrator | Tuesday 10 March 2026 00:44:51 +0000 (0:00:00.228) 0:01:02.371 ********* 2026-03-10 00:44:56.413956 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'}) 2026-03-10 00:44:56.413968 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}) 2026-03-10 00:44:56.413979 | orchestrator | 2026-03-10 00:44:56.413989 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-10 00:44:56.414082 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:01.848) 0:01:04.220 ********* 2026-03-10 00:44:56.414099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:44:56.414111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:44:56.414122 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 00:44:56.414132 | orchestrator | 2026-03-10 00:44:56.414143 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-10 00:44:56.414154 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:00.163) 0:01:04.383 ********* 2026-03-10 00:44:56.414165 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'}) 2026-03-10 00:44:56.414176 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}) 2026-03-10 00:44:56.414187 | orchestrator | 2026-03-10 00:44:56.414197 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-10 00:44:56.414208 | orchestrator | Tuesday 10 March 2026 00:44:54 +0000 (0:00:01.245) 0:01:05.629 ********* 2026-03-10 00:44:56.414219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:44:56.414230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:44:56.414240 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.414251 | orchestrator | 2026-03-10 00:44:56.414262 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-10 00:44:56.414272 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.147) 0:01:05.776 ********* 2026-03-10 00:44:56.414283 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.414293 | orchestrator | 2026-03-10 00:44:56.414304 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-10 00:44:56.414314 | 
orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.142) 0:01:05.918 ********* 2026-03-10 00:44:56.414325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:44:56.414341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:44:56.414353 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.414398 | orchestrator | 2026-03-10 00:44:56.414410 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-10 00:44:56.414421 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.141) 0:01:06.060 ********* 2026-03-10 00:44:56.414439 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.414450 | orchestrator | 2026-03-10 00:44:56.414461 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-10 00:44:56.414471 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.134) 0:01:06.194 ********* 2026-03-10 00:44:56.414482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:44:56.414493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:44:56.414504 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.414514 | orchestrator | 2026-03-10 00:44:56.414525 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-10 00:44:56.414535 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.153) 0:01:06.348 ********* 2026-03-10 00:44:56.414546 | orchestrator | 
skipping: [testbed-node-5] 2026-03-10 00:44:56.414556 | orchestrator | 2026-03-10 00:44:56.414567 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-10 00:44:56.414578 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.127) 0:01:06.475 ********* 2026-03-10 00:44:56.414588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:44:56.414599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:44:56.414610 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:44:56.414620 | orchestrator | 2026-03-10 00:44:56.414631 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-10 00:44:56.414642 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.148) 0:01:06.624 ********* 2026-03-10 00:44:56.414652 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:44:56.414663 | orchestrator | 2026-03-10 00:44:56.414673 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-10 00:44:56.414684 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.360) 0:01:06.985 ********* 2026-03-10 00:44:56.414703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:02.737668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:02.737781 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.737813 | orchestrator | 2026-03-10 00:45:02.737840 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-10 00:45:02.737854 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.189) 0:01:07.175 ********* 2026-03-10 00:45:02.737866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:02.737877 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:02.737889 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.737900 | orchestrator | 2026-03-10 00:45:02.737911 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-10 00:45:02.737922 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.173) 0:01:07.348 ********* 2026-03-10 00:45:02.737933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:02.737944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:02.737980 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.737992 | orchestrator | 2026-03-10 00:45:02.738002 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-10 00:45:02.738013 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.164) 0:01:07.513 ********* 2026-03-10 00:45:02.738098 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.738109 | orchestrator | 2026-03-10 00:45:02.738120 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-10 00:45:02.738131 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 
(0:00:00.141) 0:01:07.654 ********* 2026-03-10 00:45:02.738141 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.738152 | orchestrator | 2026-03-10 00:45:02.738163 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-10 00:45:02.738174 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.142) 0:01:07.796 ********* 2026-03-10 00:45:02.738185 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.738199 | orchestrator | 2026-03-10 00:45:02.738212 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-10 00:45:02.738225 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.140) 0:01:07.936 ********* 2026-03-10 00:45:02.738238 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:45:02.738251 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-10 00:45:02.738264 | orchestrator | } 2026-03-10 00:45:02.738278 | orchestrator | 2026-03-10 00:45:02.738290 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-10 00:45:02.738303 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.148) 0:01:08.085 ********* 2026-03-10 00:45:02.738316 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:45:02.738329 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-10 00:45:02.738342 | orchestrator | } 2026-03-10 00:45:02.738356 | orchestrator | 2026-03-10 00:45:02.738435 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-10 00:45:02.738455 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.144) 0:01:08.230 ********* 2026-03-10 00:45:02.738473 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:45:02.738492 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-10 00:45:02.738509 | orchestrator | } 2026-03-10 00:45:02.738528 | orchestrator | 2026-03-10 00:45:02.738549 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-10 00:45:02.738567 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.148) 0:01:08.378 ********* 2026-03-10 00:45:02.738586 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:02.738604 | orchestrator | 2026-03-10 00:45:02.738624 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-10 00:45:02.738643 | orchestrator | Tuesday 10 March 2026 00:44:58 +0000 (0:00:00.535) 0:01:08.913 ********* 2026-03-10 00:45:02.738661 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:02.738676 | orchestrator | 2026-03-10 00:45:02.738686 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-10 00:45:02.738697 | orchestrator | Tuesday 10 March 2026 00:44:58 +0000 (0:00:00.531) 0:01:09.445 ********* 2026-03-10 00:45:02.738708 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:02.738719 | orchestrator | 2026-03-10 00:45:02.738730 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-10 00:45:02.738741 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.732) 0:01:10.177 ********* 2026-03-10 00:45:02.738751 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:02.738762 | orchestrator | 2026-03-10 00:45:02.738772 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-10 00:45:02.738783 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.156) 0:01:10.334 ********* 2026-03-10 00:45:02.738794 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.738804 | orchestrator | 2026-03-10 00:45:02.738815 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-10 00:45:02.738838 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.118) 0:01:10.452 ********* 2026-03-10 00:45:02.738849 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.738860 | orchestrator | 2026-03-10 00:45:02.738870 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-10 00:45:02.738881 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.122) 0:01:10.575 ********* 2026-03-10 00:45:02.738892 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:45:02.738903 | orchestrator |  "vgs_report": { 2026-03-10 00:45:02.738914 | orchestrator |  "vg": [] 2026-03-10 00:45:02.738948 | orchestrator |  } 2026-03-10 00:45:02.738960 | orchestrator | } 2026-03-10 00:45:02.738971 | orchestrator | 2026-03-10 00:45:02.738982 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-10 00:45:02.738993 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.150) 0:01:10.725 ********* 2026-03-10 00:45:02.739004 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739014 | orchestrator | 2026-03-10 00:45:02.739025 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-10 00:45:02.739036 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.141) 0:01:10.867 ********* 2026-03-10 00:45:02.739046 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739057 | orchestrator | 2026-03-10 00:45:02.739068 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-10 00:45:02.739078 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.143) 0:01:11.011 ********* 2026-03-10 00:45:02.739089 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739099 | orchestrator | 2026-03-10 00:45:02.739110 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-10 00:45:02.739120 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.182) 0:01:11.194 ********* 2026-03-10 00:45:02.739131 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739142 | orchestrator | 2026-03-10 00:45:02.739152 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-10 00:45:02.739163 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.150) 0:01:11.345 ********* 2026-03-10 00:45:02.739174 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739185 | orchestrator | 2026-03-10 00:45:02.739195 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-10 00:45:02.739206 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.145) 0:01:11.490 ********* 2026-03-10 00:45:02.739217 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739227 | orchestrator | 2026-03-10 00:45:02.739256 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-10 00:45:02.739267 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.138) 0:01:11.628 ********* 2026-03-10 00:45:02.739278 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739289 | orchestrator | 2026-03-10 00:45:02.739300 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-10 00:45:02.739310 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.133) 0:01:11.762 ********* 2026-03-10 00:45:02.739321 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739332 | orchestrator | 2026-03-10 00:45:02.739343 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-10 00:45:02.739354 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.357) 0:01:12.120 ********* 2026-03-10 00:45:02.739394 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739407 | orchestrator | 2026-03-10 00:45:02.739422 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-10 00:45:02.739433 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.167) 0:01:12.287 ********* 2026-03-10 00:45:02.739444 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739455 | orchestrator | 2026-03-10 00:45:02.739466 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-10 00:45:02.739476 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.145) 0:01:12.433 ********* 2026-03-10 00:45:02.739494 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739505 | orchestrator | 2026-03-10 00:45:02.739516 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-10 00:45:02.739527 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.161) 0:01:12.594 ********* 2026-03-10 00:45:02.739538 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739548 | orchestrator | 2026-03-10 00:45:02.739559 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-10 00:45:02.739570 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.140) 0:01:12.735 ********* 2026-03-10 00:45:02.739581 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739592 | orchestrator | 2026-03-10 00:45:02.739602 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-10 00:45:02.739615 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.141) 0:01:12.876 ********* 2026-03-10 00:45:02.739635 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739655 | orchestrator | 2026-03-10 00:45:02.739675 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-10 00:45:02.739694 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.137) 0:01:13.013 ********* 2026-03-10 00:45:02.739713 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:02.739733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:02.739754 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739773 | orchestrator | 2026-03-10 00:45:02.739794 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-10 00:45:02.739815 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.153) 0:01:13.167 ********* 2026-03-10 00:45:02.739834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:02.739851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:02.739862 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:02.739873 | orchestrator | 2026-03-10 00:45:02.739884 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-10 00:45:02.739894 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.164) 0:01:13.332 ********* 2026-03-10 00:45:02.739915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.911595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.911697 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.911714 | orchestrator | 2026-03-10 00:45:05.911727 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-10 00:45:05.911740 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.166) 0:01:13.499 ********* 2026-03-10 00:45:05.911752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.911763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.911774 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.911785 | orchestrator | 2026-03-10 00:45:05.911796 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-10 00:45:05.911807 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.158) 0:01:13.658 ********* 2026-03-10 00:45:05.911842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.911854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.911866 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.911877 | orchestrator | 2026-03-10 00:45:05.911888 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-10 00:45:05.911899 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.148) 0:01:13.806 ********* 2026-03-10 00:45:05.911909 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.911920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.911946 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.911958 | orchestrator | 2026-03-10 00:45:05.911969 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-10 00:45:05.911980 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.395) 0:01:14.201 ********* 2026-03-10 00:45:05.911991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.912002 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.912013 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.912025 | orchestrator | 2026-03-10 00:45:05.912036 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-10 00:45:05.912047 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.174) 0:01:14.376 ********* 2026-03-10 00:45:05.912058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.912069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.912080 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.912091 | orchestrator | 2026-03-10 00:45:05.912104 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-10 00:45:05.912117 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.168) 0:01:14.544 ********* 2026-03-10 00:45:05.912131 | 
orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:05.912149 | orchestrator | 2026-03-10 00:45:05.912170 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-10 00:45:05.912189 | orchestrator | Tuesday 10 March 2026 00:45:04 +0000 (0:00:00.533) 0:01:15.078 ********* 2026-03-10 00:45:05.912210 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:05.912226 | orchestrator | 2026-03-10 00:45:05.912238 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-10 00:45:05.912248 | orchestrator | Tuesday 10 March 2026 00:45:04 +0000 (0:00:00.578) 0:01:15.657 ********* 2026-03-10 00:45:05.912259 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:05.912270 | orchestrator | 2026-03-10 00:45:05.912281 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-10 00:45:05.912291 | orchestrator | Tuesday 10 March 2026 00:45:05 +0000 (0:00:00.153) 0:01:15.810 ********* 2026-03-10 00:45:05.912302 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'vg_name': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}) 2026-03-10 00:45:05.912314 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'vg_name': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'}) 2026-03-10 00:45:05.912333 | orchestrator | 2026-03-10 00:45:05.912344 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-10 00:45:05.912355 | orchestrator | Tuesday 10 March 2026 00:45:05 +0000 (0:00:00.180) 0:01:15.990 ********* 2026-03-10 00:45:05.912405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.912417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.912428 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.912439 | orchestrator | 2026-03-10 00:45:05.912450 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-10 00:45:05.912461 | orchestrator | Tuesday 10 March 2026 00:45:05 +0000 (0:00:00.183) 0:01:16.174 ********* 2026-03-10 00:45:05.912472 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.912483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.912494 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.912505 | orchestrator | 2026-03-10 00:45:05.912515 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-10 00:45:05.912526 | orchestrator | Tuesday 10 March 2026 00:45:05 +0000 (0:00:00.168) 0:01:16.343 ********* 2026-03-10 00:45:05.912537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'})  2026-03-10 00:45:05.912547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'})  2026-03-10 00:45:05.912558 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:05.912569 | orchestrator | 2026-03-10 00:45:05.912579 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-10 00:45:05.912590 | orchestrator | Tuesday 10 March 2026 00:45:05 +0000 (0:00:00.160) 0:01:16.503 ********* 2026-03-10 00:45:05.912601 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:45:05.912612 | orchestrator |  "lvm_report": { 2026-03-10 00:45:05.912623 | orchestrator |  "lv": [ 2026-03-10 00:45:05.912634 | orchestrator |  { 2026-03-10 00:45:05.912645 | orchestrator |  "lv_name": "osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2", 2026-03-10 00:45:05.912663 | orchestrator |  "vg_name": "ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2" 2026-03-10 00:45:05.912674 | orchestrator |  }, 2026-03-10 00:45:05.912685 | orchestrator |  { 2026-03-10 00:45:05.912696 | orchestrator |  "lv_name": "osd-block-c0742eba-6300-5cfa-b498-a3704e14c384", 2026-03-10 00:45:05.912707 | orchestrator |  "vg_name": "ceph-c0742eba-6300-5cfa-b498-a3704e14c384" 2026-03-10 00:45:05.912718 | orchestrator |  } 2026-03-10 00:45:05.912729 | orchestrator |  ], 2026-03-10 00:45:05.912739 | orchestrator |  "pv": [ 2026-03-10 00:45:05.912750 | orchestrator |  { 2026-03-10 00:45:05.912761 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-10 00:45:05.912772 | orchestrator |  "vg_name": "ceph-c0742eba-6300-5cfa-b498-a3704e14c384" 2026-03-10 00:45:05.912783 | orchestrator |  }, 2026-03-10 00:45:05.912793 | orchestrator |  { 2026-03-10 00:45:05.912804 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-10 00:45:05.912815 | orchestrator |  "vg_name": "ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2" 2026-03-10 00:45:05.912826 | orchestrator |  } 2026-03-10 00:45:05.912836 | orchestrator |  ] 2026-03-10 00:45:05.912847 | orchestrator |  } 2026-03-10 00:45:05.912858 | orchestrator | } 2026-03-10 00:45:05.912877 | orchestrator | 2026-03-10 00:45:05.912889 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:45:05.912900 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-10 00:45:05.912910 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-10 00:45:05.912921 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-10 00:45:05.912932 | orchestrator | 2026-03-10 00:45:05.912943 | orchestrator | 2026-03-10 00:45:05.912954 | orchestrator | 2026-03-10 00:45:05.912965 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:45:05.912976 | orchestrator | Tuesday 10 March 2026 00:45:05 +0000 (0:00:00.145) 0:01:16.649 ********* 2026-03-10 00:45:05.912986 | orchestrator | =============================================================================== 2026-03-10 00:45:05.912997 | orchestrator | Create block VGs -------------------------------------------------------- 5.83s 2026-03-10 00:45:05.913008 | orchestrator | Create block LVs -------------------------------------------------------- 4.18s 2026-03-10 00:45:05.913018 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.86s 2026-03-10 00:45:05.913029 | orchestrator | Add known partitions to the list of available block devices ------------- 1.85s 2026-03-10 00:45:05.913040 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-03-10 00:45:05.913051 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.67s 2026-03-10 00:45:05.913061 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s 2026-03-10 00:45:05.913072 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s 2026-03-10 00:45:05.913090 | orchestrator | Add known links to the list of available block devices ------------------ 1.37s 2026-03-10 00:45:06.353788 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s 2026-03-10 00:45:06.353959 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2026-03-10 00:45:06.353987 | 
orchestrator | Print LVM report data --------------------------------------------------- 1.00s 2026-03-10 00:45:06.354008 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2026-03-10 00:45:06.354102 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.80s 2026-03-10 00:45:06.354123 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s 2026-03-10 00:45:06.354143 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s 2026-03-10 00:45:06.354164 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-03-10 00:45:06.354183 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-10 00:45:06.354204 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.74s 2026-03-10 00:45:06.354223 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2026-03-10 00:45:18.862447 | orchestrator | 2026-03-10 00:45:18 | INFO  | Task 11692bd3-d2f1-4b65-ad69-2fae65a8b7cf (facts) was prepared for execution. 2026-03-10 00:45:18.862554 | orchestrator | 2026-03-10 00:45:18 | INFO  | It takes a moment until task 11692bd3-d2f1-4b65-ad69-2fae65a8b7cf (facts) has been started and output is visible here. 
2026-03-10 00:45:31.469740 | orchestrator | 2026-03-10 00:45:31.469870 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-10 00:45:31.469899 | orchestrator | 2026-03-10 00:45:31.469920 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-10 00:45:31.469942 | orchestrator | Tuesday 10 March 2026 00:45:23 +0000 (0:00:00.308) 0:00:00.308 ********* 2026-03-10 00:45:31.470000 | orchestrator | ok: [testbed-manager] 2026-03-10 00:45:31.470072 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:45:31.470085 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:45:31.470095 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:45:31.470106 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:45:31.470117 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:45:31.470127 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:31.470138 | orchestrator | 2026-03-10 00:45:31.470149 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-10 00:45:31.470175 | orchestrator | Tuesday 10 March 2026 00:45:24 +0000 (0:00:01.155) 0:00:01.464 ********* 2026-03-10 00:45:31.470187 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:45:31.470199 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:45:31.470209 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:45:31.470220 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:45:31.470230 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:45:31.470247 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:45:31.470266 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:31.470286 | orchestrator | 2026-03-10 00:45:31.470308 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:45:31.470330 | orchestrator | 2026-03-10 00:45:31.470372 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-10 00:45:31.470387 | orchestrator | Tuesday 10 March 2026 00:45:25 +0000 (0:00:01.168) 0:00:02.632 ********* 2026-03-10 00:45:31.470399 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:45:31.470411 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:45:31.470423 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:45:31.470436 | orchestrator | ok: [testbed-manager] 2026-03-10 00:45:31.470448 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:45:31.470460 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:45:31.470472 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:31.470485 | orchestrator | 2026-03-10 00:45:31.470497 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-10 00:45:31.470509 | orchestrator | 2026-03-10 00:45:31.470521 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-10 00:45:31.470534 | orchestrator | Tuesday 10 March 2026 00:45:30 +0000 (0:00:04.832) 0:00:07.464 ********* 2026-03-10 00:45:31.470546 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:45:31.470558 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:45:31.470571 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:45:31.470584 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:45:31.470595 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:45:31.470607 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:45:31.470620 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:31.470632 | orchestrator | 2026-03-10 00:45:31.470643 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:45:31.470654 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:45:31.470667 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-10 00:45:31.470678 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:45:31.470689 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:45:31.470699 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:45:31.470710 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:45:31.470721 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:45:31.470746 | orchestrator | 2026-03-10 00:45:31.470765 | orchestrator | 2026-03-10 00:45:31.470783 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:45:31.470802 | orchestrator | Tuesday 10 March 2026 00:45:31 +0000 (0:00:00.580) 0:00:08.045 ********* 2026-03-10 00:45:31.470820 | orchestrator | =============================================================================== 2026-03-10 00:45:31.470835 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.83s 2026-03-10 00:45:31.470851 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2026-03-10 00:45:31.470868 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s 2026-03-10 00:45:31.470886 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-03-10 00:45:44.002532 | orchestrator | 2026-03-10 00:45:43 | INFO  | Task 21943d63-2c70-4bca-928f-4adad76a4f30 (frr) was prepared for execution. 2026-03-10 00:45:44.002636 | orchestrator | 2026-03-10 00:45:43 | INFO  | It takes a moment until task 21943d63-2c70-4bca-928f-4adad76a4f30 (frr) has been started and output is visible here. 
2026-03-10 00:46:10.502371 | orchestrator | 2026-03-10 00:46:10.502528 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-10 00:46:10.502556 | orchestrator | 2026-03-10 00:46:10.502576 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-10 00:46:10.502597 | orchestrator | Tuesday 10 March 2026 00:45:48 +0000 (0:00:00.213) 0:00:00.213 ********* 2026-03-10 00:46:10.502618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:46:10.502640 | orchestrator | 2026-03-10 00:46:10.502660 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-10 00:46:10.502681 | orchestrator | Tuesday 10 March 2026 00:45:48 +0000 (0:00:00.230) 0:00:00.444 ********* 2026-03-10 00:46:10.502703 | orchestrator | changed: [testbed-manager] 2026-03-10 00:46:10.502724 | orchestrator | 2026-03-10 00:46:10.502745 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-10 00:46:10.502766 | orchestrator | Tuesday 10 March 2026 00:45:49 +0000 (0:00:01.112) 0:00:01.557 ********* 2026-03-10 00:46:10.502786 | orchestrator | changed: [testbed-manager] 2026-03-10 00:46:10.502805 | orchestrator | 2026-03-10 00:46:10.502818 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-10 00:46:10.502829 | orchestrator | Tuesday 10 March 2026 00:45:58 +0000 (0:00:09.462) 0:00:11.020 ********* 2026-03-10 00:46:10.502843 | orchestrator | ok: [testbed-manager] 2026-03-10 00:46:10.502856 | orchestrator | 2026-03-10 00:46:10.502868 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-10 00:46:10.502880 | orchestrator | Tuesday 10 March 2026 00:46:00 +0000 (0:00:01.089) 0:00:12.109 ********* 2026-03-10 
00:46:10.502892 | orchestrator | changed: [testbed-manager] 2026-03-10 00:46:10.502904 | orchestrator | 2026-03-10 00:46:10.502916 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-10 00:46:10.502929 | orchestrator | Tuesday 10 March 2026 00:46:01 +0000 (0:00:01.036) 0:00:13.146 ********* 2026-03-10 00:46:10.502941 | orchestrator | ok: [testbed-manager] 2026-03-10 00:46:10.502953 | orchestrator | 2026-03-10 00:46:10.502966 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-10 00:46:10.502979 | orchestrator | Tuesday 10 March 2026 00:46:02 +0000 (0:00:01.188) 0:00:14.335 ********* 2026-03-10 00:46:10.502990 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:46:10.503002 | orchestrator | 2026-03-10 00:46:10.503014 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-10 00:46:10.503027 | orchestrator | Tuesday 10 March 2026 00:46:02 +0000 (0:00:00.137) 0:00:14.472 ********* 2026-03-10 00:46:10.503061 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:46:10.503168 | orchestrator | 2026-03-10 00:46:10.503184 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-10 00:46:10.503196 | orchestrator | Tuesday 10 March 2026 00:46:02 +0000 (0:00:00.164) 0:00:14.636 ********* 2026-03-10 00:46:10.503209 | orchestrator | changed: [testbed-manager] 2026-03-10 00:46:10.503219 | orchestrator | 2026-03-10 00:46:10.503230 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-10 00:46:10.503241 | orchestrator | Tuesday 10 March 2026 00:46:03 +0000 (0:00:01.005) 0:00:15.642 ********* 2026-03-10 00:46:10.503252 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-10 00:46:10.503262 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-10 00:46:10.503274 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-10 00:46:10.503285 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-10 00:46:10.503296 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-10 00:46:10.503307 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-10 00:46:10.503344 | orchestrator | 2026-03-10 00:46:10.503356 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-10 00:46:10.503367 | orchestrator | Tuesday 10 March 2026 00:46:06 +0000 (0:00:03.323) 0:00:18.966 ********* 2026-03-10 00:46:10.503378 | orchestrator | ok: [testbed-manager] 2026-03-10 00:46:10.503388 | orchestrator | 2026-03-10 00:46:10.503399 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-10 00:46:10.503410 | orchestrator | Tuesday 10 March 2026 00:46:08 +0000 (0:00:01.683) 0:00:20.650 ********* 2026-03-10 00:46:10.503420 | orchestrator | changed: [testbed-manager] 2026-03-10 00:46:10.503431 | orchestrator | 2026-03-10 00:46:10.503442 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:46:10.503453 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:46:10.503464 | orchestrator | 2026-03-10 00:46:10.503475 | orchestrator | 2026-03-10 00:46:10.503485 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:46:10.503497 | orchestrator | Tuesday 10 March 2026 00:46:10 +0000 (0:00:01.607) 0:00:22.257 ********* 2026-03-10 00:46:10.503507 | 
orchestrator | =============================================================================== 2026-03-10 00:46:10.503518 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.46s 2026-03-10 00:46:10.503529 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.32s 2026-03-10 00:46:10.503540 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.68s 2026-03-10 00:46:10.503550 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.61s 2026-03-10 00:46:10.503561 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 2026-03-10 00:46:10.503595 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.11s 2026-03-10 00:46:10.503607 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.09s 2026-03-10 00:46:10.503618 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.04s 2026-03-10 00:46:10.503629 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-03-10 00:46:10.503639 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-03-10 00:46:10.503650 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-03-10 00:46:10.503661 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-10 00:46:10.841419 | orchestrator | 2026-03-10 00:46:10.844522 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 10 00:46:10 UTC 2026 2026-03-10 00:46:10.844604 | orchestrator | 2026-03-10 00:46:12.766427 | orchestrator | 2026-03-10 00:46:12 | INFO  | Collection nutshell is prepared for execution 2026-03-10 00:46:12.766551 | orchestrator | 2026-03-10 00:46:12 | INFO  | A [0] - 
dotfiles 2026-03-10 00:46:22.797134 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - homer 2026-03-10 00:46:22.797248 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - netdata 2026-03-10 00:46:22.797267 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - openstackclient 2026-03-10 00:46:22.797282 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - phpmyadmin 2026-03-10 00:46:22.797301 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - common 2026-03-10 00:46:22.804216 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- loadbalancer 2026-03-10 00:46:22.804336 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [2] --- opensearch 2026-03-10 00:46:22.804365 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [2] --- mariadb-ng 2026-03-10 00:46:22.804465 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [3] ---- horizon 2026-03-10 00:46:22.805176 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [3] ---- keystone 2026-03-10 00:46:22.805435 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- neutron 2026-03-10 00:46:22.805637 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [5] ------ wait-for-nova 2026-03-10 00:46:22.805921 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [6] ------- octavia 2026-03-10 00:46:22.807725 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- barbican 2026-03-10 00:46:22.807892 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- designate 2026-03-10 00:46:22.808342 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- ironic 2026-03-10 00:46:22.808367 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- placement 2026-03-10 00:46:22.808727 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- magnum 2026-03-10 00:46:22.809708 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- openvswitch 2026-03-10 00:46:22.809790 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [2] --- ovn 2026-03-10 00:46:22.810213 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- memcached 2026-03-10 
00:46:22.810534 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- redis 2026-03-10 00:46:22.810560 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- rabbitmq-ng 2026-03-10 00:46:22.811066 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - kubernetes 2026-03-10 00:46:22.815203 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- kubeconfig 2026-03-10 00:46:22.815247 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- copy-kubeconfig 2026-03-10 00:46:22.815262 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [0] - ceph 2026-03-10 00:46:22.817528 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [1] -- ceph-pools 2026-03-10 00:46:22.817917 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [2] --- copy-ceph-keys 2026-03-10 00:46:22.817943 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [3] ---- cephclient 2026-03-10 00:46:22.817954 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-10 00:46:22.817966 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- wait-for-keystone 2026-03-10 00:46:22.818419 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-10 00:46:22.818453 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [5] ------ glance 2026-03-10 00:46:22.818657 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [5] ------ cinder 2026-03-10 00:46:22.818732 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [5] ------ nova 2026-03-10 00:46:22.819465 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [4] ----- prometheus 2026-03-10 00:46:22.819511 | orchestrator | 2026-03-10 00:46:22 | INFO  | A [5] ------ grafana 2026-03-10 00:46:23.048889 | orchestrator | 2026-03-10 00:46:23 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-10 00:46:23.048984 | orchestrator | 2026-03-10 00:46:23 | INFO  | Tasks are running in the background 2026-03-10 00:46:26.562905 | orchestrator | 2026-03-10 00:46:26 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-10 00:46:28.693351 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:28.693564 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:28.694107 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:28.696188 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:28.696647 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:28.697180 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:28.697795 | orchestrator | 2026-03-10 00:46:28 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:28.700737 | orchestrator | 2026-03-10 00:46:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:31.748171 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:31.748274 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:31.748338 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:31.748360 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:31.748378 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:31.748397 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:31.748415 | orchestrator | 2026-03-10 00:46:31 | INFO  | Task 
01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:31.748435 | orchestrator | 2026-03-10 00:46:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:34.764050 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:34.764712 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:34.764956 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:34.767038 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:34.767556 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:34.768073 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:34.768769 | orchestrator | 2026-03-10 00:46:34 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:34.768821 | orchestrator | 2026-03-10 00:46:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:37.815874 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:37.820199 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:37.820923 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:37.821641 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:37.822486 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:37.822957 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task 
529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:37.823613 | orchestrator | 2026-03-10 00:46:37 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:37.824099 | orchestrator | 2026-03-10 00:46:37 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:41.115701 | orchestrator | 2026-03-10 00:46:40 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:41.115760 | orchestrator | 2026-03-10 00:46:40 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:41.115770 | orchestrator | 2026-03-10 00:46:41 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:41.115776 | orchestrator | 2026-03-10 00:46:41 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:41.115783 | orchestrator | 2026-03-10 00:46:41 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:41.115790 | orchestrator | 2026-03-10 00:46:41 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:41.115797 | orchestrator | 2026-03-10 00:46:41 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:41.115803 | orchestrator | 2026-03-10 00:46:41 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:44.094381 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:44.094451 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:44.094460 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:44.094467 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:44.094474 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task 
55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:44.094488 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:44.094495 | orchestrator | 2026-03-10 00:46:44 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:44.094503 | orchestrator | 2026-03-10 00:46:44 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:47.208878 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:47.209509 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:47.213422 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:47.216921 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:47.220595 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:47.221895 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:47.223346 | orchestrator | 2026-03-10 00:46:47 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:47.223395 | orchestrator | 2026-03-10 00:46:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:50.421076 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state STARTED 2026-03-10 00:46:50.422973 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:50.424587 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:50.425274 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task 
59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:50.429543 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:50.431430 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:50.431997 | orchestrator | 2026-03-10 00:46:50 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:50.432038 | orchestrator | 2026-03-10 00:46:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:53.511773 | orchestrator | 2026-03-10 00:46:53.511865 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-10 00:46:53.511889 | orchestrator | 2026-03-10 00:46:53.511909 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-10 00:46:53.511940 | orchestrator | Tuesday 10 March 2026 00:46:36 +0000 (0:00:00.596) 0:00:00.596 ********* 2026-03-10 00:46:53.511958 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:46:53.511976 | orchestrator | changed: [testbed-manager] 2026-03-10 00:46:53.511993 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:46:53.512009 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:46:53.512026 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:46:53.512043 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:46:53.512060 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:46:53.512076 | orchestrator | 2026-03-10 00:46:53.512093 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-03-10 00:46:53.512110 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:04.141) 0:00:04.737 ********* 2026-03-10 00:46:53.512128 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-10 00:46:53.512147 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-10 00:46:53.512165 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-10 00:46:53.512182 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-10 00:46:53.512200 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-10 00:46:53.512218 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-10 00:46:53.512237 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-10 00:46:53.512310 | orchestrator | 2026-03-10 00:46:53.512332 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-03-10 00:46:53.512351 | orchestrator | Tuesday 10 March 2026 00:46:42 +0000 (0:00:02.209) 0:00:06.947 ********* 2026-03-10 00:46:53.512391 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:41.251089', 'end': '2026-03-10 00:46:41.257122', 'delta': '0:00:00.006033', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512438 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:41.064392', 'end': '2026-03-10 00:46:42.070635', 'delta': '0:00:01.006243', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512454 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:41.052610', 'end': '2026-03-10 00:46:41.061734', 'delta': '0:00:00.009124', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512496 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:41.179275', 'end': '2026-03-10 00:46:41.182984', 'delta': '0:00:00.003709', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512511 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:41.706597', 'end': '2026-03-10 00:46:41.715727', 'delta': '0:00:00.009130', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512524 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:41.428649', 'end': '2026-03-10 00:46:41.436501', 'delta': '0:00:00.007852', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512818 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:46:42.061249', 'end': '2026-03-10 00:46:42.069028', 'delta': '0:00:00.007779', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-10 00:46:53.512833 | orchestrator | 2026-03-10 00:46:53.512844 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-03-10 00:46:53.512855 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:02.567) 0:00:09.514 ********* 2026-03-10 00:46:53.512866 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-10 00:46:53.512877 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-10 00:46:53.512887 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-10 00:46:53.512898 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-10 00:46:53.512909 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-10 00:46:53.512919 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-10 00:46:53.512930 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-10 00:46:53.512940 | orchestrator | 2026-03-10 00:46:53.512951 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-03-10 00:46:53.512962 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:01.301) 0:00:10.816 ********* 2026-03-10 00:46:53.512973 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-10 00:46:53.512984 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-10 00:46:53.512995 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-10 00:46:53.513005 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-10 00:46:53.513016 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-10 00:46:53.513026 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-10 00:46:53.513037 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-10 00:46:53.513048 | orchestrator | 2026-03-10 00:46:53.513059 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:46:53.513079 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513091 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513102 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513113 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513133 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513143 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513159 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:46:53.513178 | orchestrator | 2026-03-10 00:46:53.513196 | orchestrator | 2026-03-10 00:46:53.513214 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:46:53.513231 | orchestrator | Tuesday 10 March 2026 00:46:50 +0000 (0:00:03.885) 0:00:14.702 ********* 2026-03-10 00:46:53.513250 | orchestrator | =============================================================================== 2026-03-10 00:46:53.513269 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.14s 2026-03-10 00:46:53.513379 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.88s 2026-03-10 00:46:53.513391 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.57s 2026-03-10 00:46:53.513403 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.21s 2026-03-10 00:46:53.513413 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.31s 2026-03-10 00:46:53.513424 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task fd114393-7ddc-46c3-a460-d09b2006098e is in state SUCCESS 2026-03-10 00:46:53.529925 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:53.530008 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:53.538246 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:53.538373 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:53.559557 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:53.559611 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:46:53.559620 | orchestrator | 2026-03-10 00:46:53 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:53.559627 | orchestrator | 2026-03-10 00:46:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:46:56.764263 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:46:56.766964 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:46:56.885525 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:46:56.885610 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:46:57.010866 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:46:57.010966 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task 
4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:46:57.010981 | orchestrator | 2026-03-10 00:46:56 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:46:57.010994 | orchestrator | 2026-03-10 00:46:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:00.178672 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:00.178761 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:00.178773 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:00.178782 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:00.178994 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:00.179795 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:00.180771 | orchestrator | 2026-03-10 00:47:00 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:00.180795 | orchestrator | 2026-03-10 00:47:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:03.281675 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:03.281780 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:03.281794 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:03.281824 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:03.281837 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task 
529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:03.283463 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:03.284927 | orchestrator | 2026-03-10 00:47:03 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:03.284976 | orchestrator | 2026-03-10 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:06.426964 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:06.444648 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:06.444751 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:06.444766 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:06.444777 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:06.444788 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:06.444799 | orchestrator | 2026-03-10 00:47:06 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:06.444811 | orchestrator | 2026-03-10 00:47:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:09.638440 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:09.638555 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:09.638571 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:09.638583 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task 
55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:09.638619 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:09.638631 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:09.638641 | orchestrator | 2026-03-10 00:47:09 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:09.638652 | orchestrator | 2026-03-10 00:47:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:12.741959 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:12.742040 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:12.742048 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:12.742053 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:12.742058 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:12.742063 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:12.742067 | orchestrator | 2026-03-10 00:47:12 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:12.742072 | orchestrator | 2026-03-10 00:47:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:15.768520 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:15.768596 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:15.768602 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task 
59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:15.768607 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:15.768612 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:15.768616 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:15.768633 | orchestrator | 2026-03-10 00:47:15 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:15.768638 | orchestrator | 2026-03-10 00:47:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:18.943090 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:18.943181 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:18.943195 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state STARTED 2026-03-10 00:47:18.943206 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:18.943215 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:18.943224 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:18.956744 | orchestrator | 2026-03-10 00:47:18 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:18.956825 | orchestrator | 2026-03-10 00:47:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:22.095217 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:22.095392 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task 
a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:22.095413 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task 59995611-0857-4b01-a422-9c508a97d391 is in state SUCCESS 2026-03-10 00:47:22.095431 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:22.095443 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:22.095455 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:22.095467 | orchestrator | 2026-03-10 00:47:22 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:22.095479 | orchestrator | 2026-03-10 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:25.151829 | orchestrator | 2026-03-10 00:47:25 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:25.151917 | orchestrator | 2026-03-10 00:47:25 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:25.151932 | orchestrator | 2026-03-10 00:47:25 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:25.151943 | orchestrator | 2026-03-10 00:47:25 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:25.151955 | orchestrator | 2026-03-10 00:47:25 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:25.156233 | orchestrator | 2026-03-10 00:47:25 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:25.156395 | orchestrator | 2026-03-10 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:28.203232 | orchestrator | 2026-03-10 00:47:28 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:28.205854 | orchestrator | 2026-03-10 00:47:28 | INFO  | Task 
a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:28.209130 | orchestrator | 2026-03-10 00:47:28 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:28.209783 | orchestrator | 2026-03-10 00:47:28 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:28.212440 | orchestrator | 2026-03-10 00:47:28 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:28.214205 | orchestrator | 2026-03-10 00:47:28 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state STARTED 2026-03-10 00:47:28.214313 | orchestrator | 2026-03-10 00:47:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:31.278746 | orchestrator | 2026-03-10 00:47:31 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:31.281585 | orchestrator | 2026-03-10 00:47:31 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:31.286864 | orchestrator | 2026-03-10 00:47:31 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:47:31.290397 | orchestrator | 2026-03-10 00:47:31 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:47:31.306778 | orchestrator | 2026-03-10 00:47:31 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED 2026-03-10 00:47:31.313698 | orchestrator | 2026-03-10 00:47:31 | INFO  | Task 01aa4a0d-8a91-4509-b50a-24871c6e8439 is in state SUCCESS 2026-03-10 00:47:31.319185 | orchestrator | 2026-03-10 00:47:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:47:34.437464 | orchestrator | 2026-03-10 00:47:34 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:47:34.440283 | orchestrator | 2026-03-10 00:47:34 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED 2026-03-10 00:47:34.441382 | orchestrator | 2026-03-10 00:47:34 | INFO  | Task 
55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED
2026-03-10 00:47:34.444480 | orchestrator | 2026-03-10 00:47:34 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED
2026-03-10 00:47:34.450290 | orchestrator | 2026-03-10 00:47:34 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state STARTED
2026-03-10 00:47:34.450355 | orchestrator | 2026-03-10 00:47:34 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles from 00:47:37 to 00:48:08 omitted; all five tasks remained in state STARTED]
2026-03-10 00:48:11.476813 | orchestrator | 2026-03-10 00:48:11 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED
2026-03-10 00:48:11.477050 | orchestrator | 2026-03-10 00:48:11 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED
2026-03-10 00:48:11.477947 | orchestrator | 2026-03-10 00:48:11 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED
2026-03-10 00:48:11.480275 | orchestrator | 2026-03-10 00:48:11 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED
2026-03-10 00:48:11.480667 | orchestrator | 2026-03-10 00:48:11 | INFO  | Task 4e016ceb-7ff7-4b93-86a4-a232bd3e1e9d is in state SUCCESS
2026-03-10 00:48:11.481472 | orchestrator |
2026-03-10 00:48:11.481545 | orchestrator |
PLAY [Apply role homer] ********************************************************

TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
Tuesday 10 March 2026 00:46:35 +0000 (0:00:00.760) 0:00:00.760 *********
ok: [testbed-manager] => {
    "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
}

TASK [osism.services.homer : Create traefik external network] ******************
Tuesday 10 March 2026 00:46:36 +0000 (0:00:00.308) 0:00:01.068 *********
ok: [testbed-manager]

TASK [osism.services.homer : Create required directories] **********************
Tuesday 10 March 2026 00:46:38 +0000 (0:00:02.068) 0:00:03.137 *********
changed: [testbed-manager] => (item=/opt/homer/configuration)
ok: [testbed-manager] => (item=/opt/homer)

TASK [osism.services.homer : Copy config.yml configuration file] ***************
Tuesday 10 March 2026 00:46:39 +0000 (0:00:01.424) 0:00:04.561 *********
changed: [testbed-manager]

TASK [osism.services.homer : Copy docker-compose.yml file] *********************
Tuesday 10 March 2026 00:46:43 +0000 (0:00:04.199) 0:00:08.762 *********
changed: [testbed-manager]

TASK [osism.services.homer : Manage homer service] *****************************
Tuesday 10 March 2026 00:46:46 +0000 (0:00:02.716) 0:00:11.478 *********
FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
Tuesday 10 March 2026 00:47:14 +0000 (0:00:28.026) 0:00:39.505 *********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 10 March 2026 00:47:18 +0000 (0:00:03.974) 0:00:43.480 *********
===============================================================================
osism.services.homer : Manage homer service ---------------------------- 28.03s
osism.services.homer : Copy config.yml configuration file --------------- 4.20s
osism.services.homer : Restart homer service ---------------------------- 3.97s
osism.services.homer : Copy docker-compose.yml file --------------------- 2.72s
osism.services.homer : Create traefik external network ------------------ 2.07s
osism.services.homer : Create required directories ---------------------- 1.42s
osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.31s

PLAY [Apply role openstackclient] **********************************************

TASK [osism.services.openstackclient : Include tasks] **************************
Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.746) 0:00:00.746 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager

TASK [osism.services.openstackclient : Create required directories] ************
Tuesday 10 March 2026 00:46:38 +0000 (0:00:00.247) 0:00:00.994 *********
changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
changed: [testbed-manager] => (item=/opt/openstackclient/data)
ok: [testbed-manager] => (item=/opt/openstackclient)

TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
Tuesday 10 March 2026 00:46:39 +0000 (0:00:01.012) 0:00:02.006 *********
changed: [testbed-manager]

TASK [osism.services.openstackclient : Manage openstackclient service] *********
Tuesday 10 March 2026 00:46:41 +0000 (0:00:02.440) 0:00:04.446 *********
FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
ok: [testbed-manager]

TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
Tuesday 10 March 2026 00:47:19 +0000 (0:00:37.851) 0:00:42.298 *********
changed: [testbed-manager]

TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
Tuesday 10 March 2026 00:47:21 +0000 (0:00:02.004) 0:00:44.302 *********
ok: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
Tuesday 10 March 2026 00:47:23 +0000 (0:00:01.869) 0:00:46.172 *********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
Tuesday 10 March 2026 00:47:27 +0000 (0:00:03.637) 0:00:49.809 *********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
Tuesday 10 March 2026 00:47:28 +0000 (0:00:01.360) 0:00:51.170 *********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
Tuesday 10 March 2026 00:47:30 +0000 (0:00:01.810) 0:00:52.980 *********
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 10 March 2026 00:47:30 +0000 (0:00:00.546) 0:00:53.526 *********
===============================================================================
osism.services.openstackclient : Manage openstackclient service -------- 37.85s
osism.services.openstackclient : Restart openstackclient service -------- 3.64s
osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.44s
osism.services.openstackclient : Copy openstack wrapper script ---------- 2.00s
osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.87s
osism.services.openstackclient : Wait for an healthy service ------------ 1.81s
osism.services.openstackclient : Ensure that all containers are up ------ 1.36s
osism.services.openstackclient : Create required directories ------------ 1.01s
osism.services.openstackclient : Copy bash completion script ------------ 0.55s
osism.services.openstackclient : Include tasks -------------------------- 0.25s

PLAY [Apply role phpmyadmin] ***************************************************

TASK [osism.services.phpmyadmin : Create traefik external network] *************
Tuesday 10 March 2026 00:46:58 +0000 (0:00:00.351) 0:00:00.351 *********
ok: [testbed-manager]

TASK [osism.services.phpmyadmin : Create required directories] *****************
Tuesday 10 March 2026 00:47:00 +0000 (0:00:01.498) 0:00:01.850 *********
changed: [testbed-manager] => (item=/opt/phpmyadmin)

TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
Tuesday 10 March 2026 00:47:00 +0000 (0:00:00.826) 0:00:02.676 *********
changed: [testbed-manager]

TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
Tuesday 10 March 2026 00:47:02 +0000 (0:00:01.541) 0:00:04.218 *********
FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
Tuesday 10 March 2026 00:47:59 +0000 (0:00:57.413) 0:01:01.632 *********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 10 March 2026 00:48:08 +0000 (0:00:08.099) 0:01:09.731 *********
===============================================================================
osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.41s
osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.10s
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.54s
osism.services.phpmyadmin : Create traefik external network ------------- 1.50s
osism.services.phpmyadmin : Create required directories ----------------- 0.83s
2026-03-10 00:48:11.483814 | orchestrator | 2026-03-10 00:48:11 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:48:14.802010 | orchestrator | 2026-03-10 00:48:14 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED
2026-03-10 00:48:14.802935 | orchestrator | 2026-03-10 00:48:14 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED
2026-03-10 00:48:14.802962 | orchestrator | 2026-03-10 00:48:14 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED
2026-03-10 00:48:14.802966 | orchestrator | 2026-03-10 00:48:14 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED
2026-03-10 00:48:14.802971 | orchestrator | 2026-03-10 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:48:17.685707 | orchestrator | 2026-03-10 00:48:17 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED
2026-03-10 00:48:17.690563 | orchestrator | 2026-03-10 00:48:17 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state STARTED
2026-03-10 00:48:17.691716 | orchestrator | 2026-03-10 00:48:17 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED
2026-03-10 00:48:17.693397 | orchestrator | 2026-03-10 00:48:17 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED
2026-03-10 00:48:17.693434 | orchestrator | 2026-03-10 00:48:17 | INFO  | Wait 1 second(s) until
the next check
[identical polling cycles at 00:48:20 and 00:48:23 omitted; all four remaining tasks stayed in state STARTED]
2026-03-10 00:48:26.841407 | orchestrator | 2026-03-10 00:48:26 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED
2026-03-10 00:48:26.843057 | orchestrator |

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on enabled services] ***********************************
Tuesday 10 March 2026 00:46:36 +0000 (0:00:00.492) 0:00:00.492 *********
changed: [testbed-manager] => (item=enable_netdata_True)
changed: [testbed-node-0] => (item=enable_netdata_True)
changed: [testbed-node-1] => (item=enable_netdata_True)
changed: [testbed-node-3] => (item=enable_netdata_True)
changed: [testbed-node-2] => (item=enable_netdata_True)
changed: [testbed-node-4] => (item=enable_netdata_True)
changed: [testbed-node-5] => (item=enable_netdata_True)

PLAY [Apply role netdata] ******************************************************

TASK [osism.services.netdata : Include distribution specific install tasks] ****
Tuesday 10 March 2026 00:46:38 +0000 (0:00:01.882) 0:00:02.375 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
Tuesday 10 March 2026 00:46:41 +0000 (0:00:02.833) 0:00:05.209 *********
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Install apt-transport-https package] ************
Tuesday 10 March 2026 00:46:42 +0000 (0:00:01.876) 0:00:07.085 *********
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-manager]

TASK [osism.services.netdata : Add repository gpg key] *************************
Tuesday 10 March 2026 00:46:46 +0000 (0:00:03.938) 0:00:11.024 *********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add repository] *********************************
Tuesday 10 March 2026 00:46:50 +0000 (0:00:03.652) 0:00:14.676 *********
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.netdata : Install package netdata] ************************
Tuesday 10 March 2026 00:47:04 +0000 (0:00:14.391) 0:00:29.068 *********
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-manager]

TASK [osism.services.netdata : Include config tasks] ***************************
Tuesday 10 March 2026 00:47:52 +0000 (0:00:47.376) 0:01:16.444 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Copy configuration files] ***********************
Tuesday 10 March 2026 00:47:54 +0000 (0:00:01.985) 0:01:18.429 *********
changed: [testbed-node-0] => (item=netdata.conf)
changed: [testbed-node-1] => (item=netdata.conf)
changed: [testbed-manager] => (item=netdata.conf)
changed: [testbed-node-3] => (item=netdata.conf)
changed: [testbed-node-4] => (item=netdata.conf)
changed: [testbed-node-2] => (item=netdata.conf)
changed: [testbed-node-5] => (item=netdata.conf)
changed: [testbed-node-3] => (item=stream.conf)
changed: [testbed-node-0] => (item=stream.conf)
changed: [testbed-node-1] => (item=stream.conf)
changed: [testbed-node-4] => (item=stream.conf)
changed: [testbed-node-5] => (item=stream.conf)
changed: [testbed-node-2] => (item=stream.conf)
changed: [testbed-manager] => (item=stream.conf)

TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
Tuesday 10 March 2026 00:48:01 +0000 (0:00:07.239) 0:01:25.669 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Opt out from anonymous statistics] **************
Tuesday 10 March 2026 00:48:03 +0000 (0:00:01.536) 0:01:27.205 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add netdata user to docker group] ***************
Tuesday 10 March 2026 00:48:04 +0000 (0:00:01.761) 0:01:28.967 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Manage service netdata] *************************
Tuesday 10 March 2026 00:48:06 +0000 (0:00:01.687) 0:01:30.654 *********
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.services.netdata : Include host type specific tasks] ***************
Tuesday 10 March 2026 00:48:08 +0000 (0:00:02.360) 0:01:33.014 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
Tuesday 10 March 2026 00:48:10 +0000 (0:00:01.955) 0:01:34.970 *********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
Tuesday 10 March 2026 00:48:13 +0000 (0:00:02.503) 0:01:37.473 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 10 March 2026 00:48:25 +0000 (0:00:11.889) 0:01:49.362 *********
===============================================================================
osism.services.netdata : Install package netdata ----------------------- 47.38s
osism.services.netdata : Add repository
-------------------------------- 14.39s 2026-03-10 00:48:26.844697 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.89s 2026-03-10 00:48:26.844703 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.24s 2026-03-10 00:48:26.844814 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.94s 2026-03-10 00:48:26.844820 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.65s 2026-03-10 00:48:26.844825 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.83s 2026-03-10 00:48:26.844830 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.50s 2026-03-10 00:48:26.844836 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.36s 2026-03-10 00:48:26.844841 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.99s 2026-03-10 00:48:26.844846 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.96s 2026-03-10 00:48:26.844852 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.88s 2026-03-10 00:48:26.844857 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.88s 2026-03-10 00:48:26.844863 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.76s 2026-03-10 00:48:26.844868 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.69s 2026-03-10 00:48:26.844874 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.54s 2026-03-10 00:48:26.844880 | orchestrator | 2026-03-10 00:48:26 | INFO  | Task a794f113-00b5-446c-928e-2324083b1505 is in state SUCCESS 2026-03-10 00:48:26.844890 | orchestrator | 2026-03-10 00:48:26 | 
INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:26.848872 | orchestrator | 2026-03-10 00:48:26 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:26.848924 | orchestrator | 2026-03-10 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:29.910232 | orchestrator | 2026-03-10 00:48:29 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:29.911916 | orchestrator | 2026-03-10 00:48:29 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:29.914722 | orchestrator | 2026-03-10 00:48:29 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:29.915072 | orchestrator | 2026-03-10 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:32.968871 | orchestrator | 2026-03-10 00:48:32 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:32.970821 | orchestrator | 2026-03-10 00:48:32 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:32.971845 | orchestrator | 2026-03-10 00:48:32 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:32.971870 | orchestrator | 2026-03-10 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:36.013561 | orchestrator | 2026-03-10 00:48:36 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:36.014881 | orchestrator | 2026-03-10 00:48:36 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:36.015941 | orchestrator | 2026-03-10 00:48:36 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:36.015972 | orchestrator | 2026-03-10 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:39.055677 | orchestrator | 2026-03-10 00:48:39 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in 
state STARTED 2026-03-10 00:48:39.058305 | orchestrator | 2026-03-10 00:48:39 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:39.058842 | orchestrator | 2026-03-10 00:48:39 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:39.058867 | orchestrator | 2026-03-10 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:42.099388 | orchestrator | 2026-03-10 00:48:42 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:42.099825 | orchestrator | 2026-03-10 00:48:42 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:42.101145 | orchestrator | 2026-03-10 00:48:42 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:42.101276 | orchestrator | 2026-03-10 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:45.134903 | orchestrator | 2026-03-10 00:48:45 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:45.135592 | orchestrator | 2026-03-10 00:48:45 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:45.138475 | orchestrator | 2026-03-10 00:48:45 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:45.138534 | orchestrator | 2026-03-10 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:48.198341 | orchestrator | 2026-03-10 00:48:48 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:48.198774 | orchestrator | 2026-03-10 00:48:48 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:48.200246 | orchestrator | 2026-03-10 00:48:48 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:48.200305 | orchestrator | 2026-03-10 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:51.253237 | orchestrator 
| 2026-03-10 00:48:51 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:51.253416 | orchestrator | 2026-03-10 00:48:51 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:51.254152 | orchestrator | 2026-03-10 00:48:51 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:51.254251 | orchestrator | 2026-03-10 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:54.288854 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state STARTED 2026-03-10 00:48:54.289977 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:54.291775 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:54.291838 | orchestrator | 2026-03-10 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:48:57.352981 | orchestrator | 2026-03-10 00:48:57.353063 | orchestrator | 2026-03-10 00:48:57.353073 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-10 00:48:57.353082 | orchestrator | 2026-03-10 00:48:57.353091 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-10 00:48:57.353099 | orchestrator | Tuesday 10 March 2026 00:46:28 +0000 (0:00:00.283) 0:00:00.284 ********* 2026-03-10 00:48:57.353108 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:48:57.353133 | orchestrator | 2026-03-10 00:48:57.353142 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-10 00:48:57.353150 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:01.082) 0:00:01.366 ********* 
2026-03-10 00:48:57.353178 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353186 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353193 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353202 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353210 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353217 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353225 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353232 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353260 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353269 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-10 00:48:57.353276 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353284 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353336 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353349 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353356 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353363 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-10 00:48:57.353371 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353378 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353386 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353393 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353400 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-10 00:48:57.353407 | orchestrator |
2026-03-10 00:48:57.353414 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-10 00:48:57.353422 | orchestrator | Tuesday 10 March 2026 00:46:32 +0000 (0:00:03.586) 0:00:04.953 *********
2026-03-10 00:48:57.353429 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:48:57.353438 | orchestrator |
2026-03-10 00:48:57.353445 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-10 00:48:57.353452 | orchestrator | Tuesday 10 March 2026 00:46:34 +0000 (0:00:01.272) 0:00:06.225 *********
2026-03-10 00:48:57.353464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353475 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353578 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353678 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353749 | orchestrator |
2026-03-10 00:48:57.353757 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-10 00:48:57.353764 | orchestrator | Tuesday 10 March 2026 00:46:38 +0000 (0:00:04.355) 0:00:10.581 *********
2026-03-10 00:48:57.353772 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353780 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353799 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:48:57.353828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353884 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:48:57.353892 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:48:57.353899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.353955 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:48:57.353962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.353981 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:48:57.353990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.354003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354091 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:48:57.354186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354215 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:48:57.354222 | orchestrator | 2026-03-10 00:48:57.354229 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-10 00:48:57.354237 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:01.755) 0:00:12.337 ********* 2026-03-10 00:48:57.354244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354252 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354267 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354274 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:57.354288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354319 | orchestrator | 
skipping: [testbed-node-0] 2026-03-10 00:48:57.354326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354381 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:48:57 | INFO  | Task c7dcfb24-f69d-4bef-a76f-88af408d5275 is in state SUCCESS 2026-03-10 00:48:57.354400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354440 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:48:57.354447 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:48:57.354458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:48:57.354474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354501 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:48:57.354508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:48:57.354516 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:48:57.354523 | orchestrator | 2026-03-10 00:48:57.354530 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-10 00:48:57.354538 | orchestrator | Tuesday 10 March 2026 00:46:43 +0000 (0:00:03.055) 0:00:15.392 ********* 2026-03-10 00:48:57.354545 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:57.354552 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:48:57.354559 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:48:57.354571 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:48:57.354578 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:48:57.354585 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:48:57.354592 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:48:57.354600 | orchestrator | 2026-03-10 00:48:57.354607 | orchestrator | TASK [common : Restart systemd-tmpfiles] 
*************************************** 2026-03-10 00:48:57.354614 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:01.251) 0:00:16.644 ********* 2026-03-10 00:48:57.354621 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:57.354628 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:48:57.354635 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:48:57.354643 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:48:57.354650 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:48:57.354657 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:48:57.354664 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:48:57.354671 | orchestrator | 2026-03-10 00:48:57.354685 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-10 00:48:57.354692 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:02.066) 0:00:18.710 ********* 2026-03-10 00:48:57.354700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354708 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:48:57.354789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:48:57.354843 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354900 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.354907 | orchestrator |
2026-03-10 00:48:57.354915 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-10 00:48:57.354922 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:10.779) 0:00:29.490 *********
2026-03-10 00:48:57.354930 | orchestrator | [WARNING]: Skipped
2026-03-10 00:48:57.354937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-10 00:48:57.354945 | orchestrator | to this access issue:
2026-03-10 00:48:57.354952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-10 00:48:57.354959 | orchestrator | directory
2026-03-10 00:48:57.354967 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 00:48:57.354974 | orchestrator |
2026-03-10 00:48:57.354981 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-10 00:48:57.354988 | orchestrator | Tuesday 10 March 2026 00:46:59 +0000 (0:00:02.207) 0:00:31.698 *********
2026-03-10 00:48:57.354996 | orchestrator | [WARNING]: Skipped
2026-03-10 00:48:57.355003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-10 00:48:57.355010 | orchestrator | to this access issue:
2026-03-10 00:48:57.355017 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-10 00:48:57.355024 | orchestrator | directory
2026-03-10 00:48:57.355032 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 00:48:57.355039 | orchestrator |
2026-03-10 00:48:57.355049 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-10 00:48:57.355057 | orchestrator | Tuesday 10 March 2026 00:47:00 +0000 (0:00:01.256) 0:00:32.955 *********
2026-03-10 00:48:57.355064 | orchestrator | [WARNING]: Skipped
2026-03-10 00:48:57.355071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-10 00:48:57.355078 | orchestrator | to this access issue:
2026-03-10 00:48:57.355085 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-10 00:48:57.355093 | orchestrator | directory
2026-03-10 00:48:57.355100 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 00:48:57.355107 | orchestrator |
2026-03-10 00:48:57.355182 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-10 00:48:57.355194 | orchestrator | Tuesday 10 March 2026 00:47:02 +0000 (0:00:01.360) 0:00:34.315 *********
2026-03-10 00:48:57.355201 | orchestrator | [WARNING]: Skipped
2026-03-10 00:48:57.355208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-10 00:48:57.355216 | orchestrator | to this access issue:
2026-03-10 00:48:57.355223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-10 00:48:57.355230 | orchestrator | directory
2026-03-10 00:48:57.355237 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 00:48:57.355244 | orchestrator |
2026-03-10 00:48:57.355251 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-10 00:48:57.355259 | orchestrator | Tuesday 10 March 2026 00:47:03 +0000 (0:00:01.102) 0:00:35.418 *********
2026-03-10 00:48:57.355266 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:48:57.355273 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:48:57.355286 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:48:57.355293 | orchestrator | changed: [testbed-manager]
2026-03-10 00:48:57.355301 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:48:57.355308 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:48:57.355315 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:48:57.355322 | orchestrator |
2026-03-10 00:48:57.355329 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-10 00:48:57.355337 | orchestrator | Tuesday 10 March 2026 00:47:09 +0000 (0:00:06.514) 0:00:41.932 *********
2026-03-10 00:48:57.355344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355366 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355374 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355381 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355388 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-10 00:48:57.355395 | orchestrator |
2026-03-10 00:48:57.355403 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-10 00:48:57.355410 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:04.389) 0:00:46.322 *********
2026-03-10 00:48:57.355418 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:48:57.355430 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:48:57.355438 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:48:57.355445 | orchestrator | changed: [testbed-manager]
2026-03-10 00:48:57.355452 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:48:57.355460 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:48:57.355467 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:48:57.355474 | orchestrator |
2026-03-10 00:48:57.355481 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-10 00:48:57.355489 | orchestrator | Tuesday 10 March 2026 00:47:19 +0000 (0:00:05.118) 0:00:51.441 *********
2026-03-10 00:48:57.355496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355516 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355529 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355537 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355554 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355574 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355582 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355602 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355621 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355641 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355656 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355672 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355680 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355687 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355695 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355703 | orchestrator |
2026-03-10 00:48:57.355710 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-10 00:48:57.355718 | orchestrator | Tuesday 10 March 2026 00:47:22 +0000 (0:00:03.277) 0:00:54.718 *********
2026-03-10 00:48:57.355725 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355732 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355740 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355747 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355754 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355761 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355772 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-10 00:48:57.355780 | orchestrator |
2026-03-10 00:48:57.355788 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-10 00:48:57.355795 | orchestrator | Tuesday 10 March 2026 00:47:27 +0000 (0:00:04.511) 0:00:59.230 *********
2026-03-10 00:48:57.355803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355811 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355819 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355827 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355835 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355844 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355857 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-10 00:48:57.355866 | orchestrator |
2026-03-10 00:48:57.355874 | orchestrator | TASK [common : Check common containers] ****************************************
2026-03-10 00:48:57.355882 | orchestrator | Tuesday 10 March 2026 00:47:30 +0000 (0:00:03.696) 0:01:02.926 *********
2026-03-10 00:48:57.355891 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355924 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.355973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355985 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.355994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.356003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:48:57.356011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:48:57.356197 | orchestrator |
2026-03-10 00:48:57.356211 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-10 00:48:57.356225 | orchestrator | Tuesday 10 March 2026 00:47:34 +0000 (0:00:03.924) 0:01:06.851 *********
2026-03-10 00:48:57.356233 | orchestrator | changed: [testbed-manager]
2026-03-10 00:48:57.356241 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:48:57.356248 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:48:57.356256 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:48:57.356263 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:48:57.356271 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:48:57.356279 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:48:57.356286 | orchestrator |
2026-03-10 00:48:57.356294 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-10 00:48:57.356301 | orchestrator | Tuesday 10 March 2026 00:47:36 +0000 (0:00:01.913) 0:01:08.764 *********
2026-03-10 00:48:57.356309 | orchestrator | changed: [testbed-manager]
2026-03-10 00:48:57.356316 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:48:57.356324 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:48:57.356331 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:48:57.356339 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:48:57.356346 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:48:57.356354 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:48:57.356361 | orchestrator |
2026-03-10 00:48:57.356369 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356376 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:01.299) 0:01:10.064 *********
2026-03-10 00:48:57.356384 | orchestrator |
2026-03-10 00:48:57.356391 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356399 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.077) 0:01:10.142 *********
2026-03-10 00:48:57.356406 | orchestrator |
2026-03-10 00:48:57.356414 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356421 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.083) 0:01:10.225 *********
2026-03-10 00:48:57.356429 | orchestrator |
2026-03-10 00:48:57.356436 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356444 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.069) 0:01:10.295 *********
2026-03-10 00:48:57.356451 | orchestrator |
2026-03-10 00:48:57.356459 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356467 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.246) 0:01:10.541 *********
2026-03-10 00:48:57.356474 | orchestrator |
2026-03-10 00:48:57.356486 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356493 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.073) 0:01:10.615 *********
2026-03-10 00:48:57.356501 | orchestrator |
2026-03-10 00:48:57.356509 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:48:57.356516 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.065) 0:01:10.680 *********
2026-03-10 00:48:57.356524 | orchestrator |
2026-03-10 00:48:57.356531 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-10 00:48:57.356539 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:00.087) 0:01:10.767 *********
2026-03-10 00:48:57.356547 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:48:57.356554 | orchestrator | changed: [testbed-manager]
2026-03-10 00:48:57.356561 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:48:57.356569 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:48:57.356577 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:48:57.356584 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:48:57.356592 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:48:57.356599 | orchestrator |
2026-03-10 00:48:57.356607 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-10 00:48:57.356615 | orchestrator | Tuesday 10 March 2026 00:48:11 +0000 (0:00:32.579) 0:01:43.347 *********
2026-03-10 00:48:57.356623 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:48:57.356630 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:48:57.356642 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:48:57.356650 | orchestrator | changed: [testbed-manager]
2026-03-10 00:48:57.356657 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:48:57.356665 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:48:57.356672 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:48:57.356680 | orchestrator |
2026-03-10 00:48:57.356688 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-10 00:48:57.356695 | orchestrator | Tuesday 10 March 2026 00:48:42 +0000 (0:00:31.537) 0:02:14.885 *********
2026-03-10 00:48:57.356703 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:48:57.356710 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:48:57.356718 | orchestrator | ok: [testbed-manager]
2026-03-10 00:48:57.356725 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:48:57.356733 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:48:57.356741 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:48:57.356748 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:48:57.356755 | orchestrator |
2026-03-10 00:48:57.356763 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-10 00:48:57.356771 | orchestrator | Tuesday 10 March 2026 00:48:45 +0000 (0:00:02.324) 0:02:17.209 *********
2026-03-10 00:48:57.356778 | orchestrator | changed:
[testbed-manager] 2026-03-10 00:48:57.356786 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:48:57.356793 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:48:57.356801 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:48:57.356809 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:48:57.356816 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:48:57.356824 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:48:57.356831 | orchestrator | 2026-03-10 00:48:57.356839 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:48:57.356888 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356899 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356912 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356920 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356927 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356934 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356942 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-10 00:48:57.356949 | orchestrator | 2026-03-10 00:48:57.356956 | orchestrator | 2026-03-10 00:48:57.356964 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:48:57.356971 | orchestrator | Tuesday 10 March 2026 00:48:55 +0000 (0:00:10.298) 0:02:27.508 ********* 2026-03-10 00:48:57.356979 | orchestrator | =============================================================================== 
2026-03-10 00:48:57.356986 | orchestrator | common : Restart fluentd container ------------------------------------- 32.58s 2026-03-10 00:48:57.356993 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.54s 2026-03-10 00:48:57.357000 | orchestrator | common : Copying over config.json files for services ------------------- 10.78s 2026-03-10 00:48:57.357008 | orchestrator | common : Restart cron container ---------------------------------------- 10.30s 2026-03-10 00:48:57.357015 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.51s 2026-03-10 00:48:57.357028 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 5.12s 2026-03-10 00:48:57.357035 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.51s 2026-03-10 00:48:57.357042 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.39s 2026-03-10 00:48:57.357053 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.36s 2026-03-10 00:48:57.357061 | orchestrator | common : Check common containers ---------------------------------------- 3.92s 2026-03-10 00:48:57.357068 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.70s 2026-03-10 00:48:57.357076 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.59s 2026-03-10 00:48:57.357083 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.28s 2026-03-10 00:48:57.357090 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.06s 2026-03-10 00:48:57.357098 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.32s 2026-03-10 00:48:57.357105 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.21s 2026-03-10 
00:48:57.357112 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.07s 2026-03-10 00:48:57.357119 | orchestrator | common : Creating log volume -------------------------------------------- 1.91s 2026-03-10 00:48:57.357126 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.76s 2026-03-10 00:48:57.357134 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.36s 2026-03-10 00:48:57.357141 | orchestrator | 2026-03-10 00:48:57 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:48:57.357148 | orchestrator | 2026-03-10 00:48:57 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state STARTED 2026-03-10 00:48:57.357176 | orchestrator | 2026-03-10 00:48:57 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:48:57.357184 | orchestrator | 2026-03-10 00:48:57 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:48:57.357192 | orchestrator | 2026-03-10 00:48:57 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:48:57.357199 | orchestrator | 2026-03-10 00:48:57 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:48:57.357206 | orchestrator | 2026-03-10 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:00.399957 | orchestrator | 2026-03-10 00:49:00 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:00.400357 | orchestrator | 2026-03-10 00:49:00 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state STARTED 2026-03-10 00:49:00.401390 | orchestrator | 2026-03-10 00:49:00 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:00.402144 | orchestrator | 2026-03-10 00:49:00 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:00.403397 | 
orchestrator | 2026-03-10 00:49:00 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:00.406072 | orchestrator | 2026-03-10 00:49:00 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:00.406136 | orchestrator | 2026-03-10 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:03.432899 | orchestrator | 2026-03-10 00:49:03 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:03.433006 | orchestrator | 2026-03-10 00:49:03 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state STARTED 2026-03-10 00:49:03.433463 | orchestrator | 2026-03-10 00:49:03 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:03.434214 | orchestrator | 2026-03-10 00:49:03 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:03.434665 | orchestrator | 2026-03-10 00:49:03 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:03.435446 | orchestrator | 2026-03-10 00:49:03 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:03.435483 | orchestrator | 2026-03-10 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:06.462474 | orchestrator | 2026-03-10 00:49:06 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:06.462717 | orchestrator | 2026-03-10 00:49:06 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state STARTED 2026-03-10 00:49:06.465946 | orchestrator | 2026-03-10 00:49:06 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:06.466506 | orchestrator | 2026-03-10 00:49:06 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:06.467379 | orchestrator | 2026-03-10 00:49:06 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:06.468270 | 
orchestrator | 2026-03-10 00:49:06 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:06.468385 | orchestrator | 2026-03-10 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:09.506399 | orchestrator | 2026-03-10 00:49:09 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:09.507019 | orchestrator | 2026-03-10 00:49:09 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state STARTED 2026-03-10 00:49:09.508403 | orchestrator | 2026-03-10 00:49:09 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:09.509841 | orchestrator | 2026-03-10 00:49:09 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:09.511065 | orchestrator | 2026-03-10 00:49:09 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:09.512342 | orchestrator | 2026-03-10 00:49:09 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:09.512394 | orchestrator | 2026-03-10 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:12.549546 | orchestrator | 2026-03-10 00:49:12 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:12.549910 | orchestrator | 2026-03-10 00:49:12 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state STARTED 2026-03-10 00:49:12.551234 | orchestrator | 2026-03-10 00:49:12 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:12.553014 | orchestrator | 2026-03-10 00:49:12 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:12.553728 | orchestrator | 2026-03-10 00:49:12 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:12.555203 | orchestrator | 2026-03-10 00:49:12 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:12.555228 | 
orchestrator | 2026-03-10 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:15.625133 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:15.625309 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:15.625336 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task b254e54b-689b-4c94-9836-06e0dca3f2e8 is in state SUCCESS 2026-03-10 00:49:15.626505 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:15.630310 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:15.630374 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:15.630396 | orchestrator | 2026-03-10 00:49:15 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:15.630413 | orchestrator | 2026-03-10 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:18.744779 | orchestrator | 2026-03-10 00:49:18 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:18.744863 | orchestrator | 2026-03-10 00:49:18 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:18.745210 | orchestrator | 2026-03-10 00:49:18 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:18.745854 | orchestrator | 2026-03-10 00:49:18 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:18.746563 | orchestrator | 2026-03-10 00:49:18 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:18.747228 | orchestrator | 2026-03-10 00:49:18 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:18.747252 | 
orchestrator | 2026-03-10 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:21.827791 | orchestrator | 2026-03-10 00:49:21 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:21.829195 | orchestrator | 2026-03-10 00:49:21 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:21.832183 | orchestrator | 2026-03-10 00:49:21 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:21.832818 | orchestrator | 2026-03-10 00:49:21 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:21.835316 | orchestrator | 2026-03-10 00:49:21 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:21.838845 | orchestrator | 2026-03-10 00:49:21 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:21.838923 | orchestrator | 2026-03-10 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:24.879329 | orchestrator | 2026-03-10 00:49:24 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:24.879413 | orchestrator | 2026-03-10 00:49:24 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:24.880071 | orchestrator | 2026-03-10 00:49:24 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:24.883682 | orchestrator | 2026-03-10 00:49:24 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:24.884173 | orchestrator | 2026-03-10 00:49:24 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:24.885126 | orchestrator | 2026-03-10 00:49:24 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:24.885346 | orchestrator | 2026-03-10 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:27.976309 | orchestrator | 2026-03-10 
00:49:27 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:27.976405 | orchestrator | 2026-03-10 00:49:27 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:27.976443 | orchestrator | 2026-03-10 00:49:27 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:27.976453 | orchestrator | 2026-03-10 00:49:27 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state STARTED 2026-03-10 00:49:27.976462 | orchestrator | 2026-03-10 00:49:27 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:27.976471 | orchestrator | 2026-03-10 00:49:27 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:27.976480 | orchestrator | 2026-03-10 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:30.995928 | orchestrator | 2026-03-10 00:49:30 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:30.996407 | orchestrator | 2026-03-10 00:49:30 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:30.997205 | orchestrator | 2026-03-10 00:49:30 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:30.998393 | orchestrator | 2026-03-10 00:49:30 | INFO  | Task 7865384a-cc5f-460d-8233-07a791c3ad56 is in state SUCCESS 2026-03-10 00:49:30.999594 | orchestrator | 2026-03-10 00:49:30.999649 | orchestrator | 2026-03-10 00:49:30.999663 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:49:30.999675 | orchestrator | 2026-03-10 00:49:30.999685 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:49:30.999695 | orchestrator | Tuesday 10 March 2026 00:49:01 +0000 (0:00:00.320) 0:00:00.320 ********* 2026-03-10 00:49:30.999705 | orchestrator | ok: [testbed-node-0] 2026-03-10 
00:49:30.999715 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:49:30.999725 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:49:30.999735 | orchestrator | 2026-03-10 00:49:30.999744 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:49:30.999754 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.420) 0:00:00.741 ********* 2026-03-10 00:49:30.999764 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-10 00:49:30.999774 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-10 00:49:30.999784 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-10 00:49:30.999794 | orchestrator | 2026-03-10 00:49:30.999803 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-10 00:49:30.999813 | orchestrator | 2026-03-10 00:49:30.999822 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-10 00:49:30.999832 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.515) 0:00:01.256 ********* 2026-03-10 00:49:30.999841 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:49:30.999852 | orchestrator | 2026-03-10 00:49:30.999861 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-10 00:49:30.999871 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.681) 0:00:01.938 ********* 2026-03-10 00:49:30.999880 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-10 00:49:30.999890 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-10 00:49:30.999900 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-10 00:49:30.999910 | orchestrator | 2026-03-10 00:49:30.999919 | orchestrator | TASK [memcached : Copying over config.json files for 
services] ***************** 2026-03-10 00:49:30.999929 | orchestrator | Tuesday 10 March 2026 00:49:04 +0000 (0:00:00.867) 0:00:02.805 ********* 2026-03-10 00:49:30.999938 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-10 00:49:30.999948 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-10 00:49:30.999957 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-10 00:49:30.999967 | orchestrator | 2026-03-10 00:49:30.999993 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-10 00:49:31.000003 | orchestrator | Tuesday 10 March 2026 00:49:06 +0000 (0:00:02.511) 0:00:05.316 ********* 2026-03-10 00:49:31.000013 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:49:31.000023 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:49:31.000033 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:49:31.000043 | orchestrator | 2026-03-10 00:49:31.000052 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-10 00:49:31.000062 | orchestrator | Tuesday 10 March 2026 00:49:08 +0000 (0:00:02.142) 0:00:07.459 ********* 2026-03-10 00:49:31.000071 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:49:31.000081 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:49:31.000090 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:49:31.000100 | orchestrator | 2026-03-10 00:49:31.000109 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:49:31.000120 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:49:31.000177 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:49:31.000189 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:49:31.000201 | 
orchestrator | 2026-03-10 00:49:31.000213 | orchestrator | 2026-03-10 00:49:31.000224 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:49:31.000235 | orchestrator | Tuesday 10 March 2026 00:49:12 +0000 (0:00:03.588) 0:00:11.048 ********* 2026-03-10 00:49:31.000247 | orchestrator | =============================================================================== 2026-03-10 00:49:31.000258 | orchestrator | memcached : Restart memcached container --------------------------------- 3.59s 2026-03-10 00:49:31.000269 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.51s 2026-03-10 00:49:31.000280 | orchestrator | memcached : Check memcached container ----------------------------------- 2.14s 2026-03-10 00:49:31.000292 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.87s 2026-03-10 00:49:31.000312 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.68s 2026-03-10 00:49:31.000324 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-03-10 00:49:31.000335 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-03-10 00:49:31.000347 | orchestrator | 2026-03-10 00:49:31.000357 | orchestrator | 2026-03-10 00:49:31.000368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:49:31.000379 | orchestrator | 2026-03-10 00:49:31.000391 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:49:31.000402 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.323) 0:00:00.323 ********* 2026-03-10 00:49:31.000413 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:49:31.000424 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:49:31.000435 | orchestrator | ok: [testbed-node-2] 2026-03-10 
00:49:31.000446 | orchestrator | 2026-03-10 00:49:31.000458 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:49:31.000482 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.380) 0:00:00.704 ********* 2026-03-10 00:49:31.000494 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-10 00:49:31.000505 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-10 00:49:31.000517 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-10 00:49:31.000528 | orchestrator | 2026-03-10 00:49:31.000540 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-10 00:49:31.000550 | orchestrator | 2026-03-10 00:49:31.000559 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-10 00:49:31.000576 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.595) 0:00:01.300 ********* 2026-03-10 00:49:31.000585 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:49:31.000595 | orchestrator | 2026-03-10 00:49:31.000605 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-10 00:49:31.000614 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.733) 0:00:02.033 ********* 2026-03-10 00:49:31.000626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 
00:49:31.000677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000711 | orchestrator | 2026-03-10 00:49:31.000721 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-10 00:49:31.000731 | orchestrator | Tuesday 10 March 2026 00:49:05 +0000 (0:00:01.261) 0:00:03.295 ********* 2026-03-10 00:49:31.000741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 
'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000820 | orchestrator | 2026-03-10 00:49:31.000830 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 
2026-03-10 00:49:31.000840 | orchestrator | Tuesday 10 March 2026 00:49:08 +0000 (0:00:03.266) 0:00:06.561 ********* 2026-03-10 00:49:31.000850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000884 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000927 | orchestrator | 2026-03-10 00:49:31.000936 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-10 00:49:31.000946 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:03.333) 0:00:09.895 ********* 2026-03-10 00:49:31.000956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.000990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.001001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.001022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:49:31.001032 | orchestrator | 2026-03-10 00:49:31.001042 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-10 00:49:31.001052 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:02.270) 0:00:12.166 ********* 2026-03-10 00:49:31.001062 | orchestrator | 2026-03-10 00:49:31.001072 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-10 00:49:31.001081 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:00.172) 0:00:12.338 ********* 2026-03-10 00:49:31.001091 | orchestrator | 2026-03-10 00:49:31.001100 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-10 00:49:31.001110 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:00.147) 0:00:12.485 ********* 2026-03-10 00:49:31.001120 | orchestrator | 2026-03-10 00:49:31.001153 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-10 00:49:31.001163 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:00.068) 0:00:12.554 ********* 2026-03-10 00:49:31.001173 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:49:31.001183 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:49:31.001192 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:49:31.001202 | orchestrator | 2026-03-10 00:49:31.001211 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel 
container] ********************* 2026-03-10 00:49:31.001221 | orchestrator | Tuesday 10 March 2026 00:49:19 +0000 (0:00:04.834) 0:00:17.389 ********* 2026-03-10 00:49:31.001231 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:49:31.001241 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:49:31.001250 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:49:31.001260 | orchestrator | 2026-03-10 00:49:31.001270 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:49:31.001280 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:49:31.001289 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:49:31.001299 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:49:31.001309 | orchestrator | 2026-03-10 00:49:31.001319 | orchestrator | 2026-03-10 00:49:31.001332 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:49:31.001342 | orchestrator | Tuesday 10 March 2026 00:49:29 +0000 (0:00:10.259) 0:00:27.648 ********* 2026-03-10 00:49:31.001352 | orchestrator | =============================================================================== 2026-03-10 00:49:31.001362 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.26s 2026-03-10 00:49:31.001371 | orchestrator | redis : Restart redis container ----------------------------------------- 4.83s 2026-03-10 00:49:31.001381 | orchestrator | redis : Copying over redis config files --------------------------------- 3.33s 2026-03-10 00:49:31.001390 | orchestrator | redis : Copying over default config.json files -------------------------- 3.27s 2026-03-10 00:49:31.001400 | orchestrator | redis : Check redis containers ------------------------------------------ 2.27s 
2026-03-10 00:49:31.001415 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.26s 2026-03-10 00:49:31.001425 | orchestrator | redis : include_tasks --------------------------------------------------- 0.73s 2026-03-10 00:49:31.001434 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-03-10 00:49:31.001444 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.39s 2026-03-10 00:49:31.001453 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-03-10 00:49:31.001463 | orchestrator | 2026-03-10 00:49:30 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:31.001472 | orchestrator | 2026-03-10 00:49:30 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:31.001482 | orchestrator | 2026-03-10 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:34.066270 | orchestrator | 2026-03-10 00:49:34 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:34.079957 | orchestrator | 2026-03-10 00:49:34 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:49:34.080045 | orchestrator | 2026-03-10 00:49:34 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:49:34.080056 | orchestrator | 2026-03-10 00:49:34 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:49:34.080064 | orchestrator | 2026-03-10 00:49:34 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:49:34.080074 | orchestrator | 2026-03-10 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:49:37.155456 | orchestrator | 2026-03-10 00:49:37 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:49:37.159869 | orchestrator | 2026-03-10 00:49:37 | 
bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:50:07.891634 | orchestrator | 2026-03-10 00:50:07 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:50:07.894056 | orchestrator | 2026-03-10 00:50:07 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:50:07.895521 | orchestrator | 2026-03-10 00:50:07 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:50:07.895556 | orchestrator | 2026-03-10 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:10.924508 | orchestrator | 2026-03-10 00:50:10 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:50:10.926000 | orchestrator | 2026-03-10 00:50:10 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:50:10.928453 | orchestrator | 2026-03-10 00:50:10 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state STARTED 2026-03-10 00:50:10.930255 | orchestrator | 2026-03-10 00:50:10 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:50:10.931793 | orchestrator | 2026-03-10 00:50:10 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:50:10.931921 | orchestrator | 2026-03-10 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:13.971920 | orchestrator | 2026-03-10 00:50:13 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:50:13.972536 | orchestrator | 2026-03-10 00:50:13 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:50:13.974192 | orchestrator | 2026-03-10 00:50:13 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:50:13.975314 | orchestrator | 2026-03-10 00:50:13 | INFO  | Task 9add44aa-c10e-4c72-82f2-99b0116bd2d8 is in state SUCCESS 2026-03-10 00:50:13.978160 | orchestrator | 2026-03-10 00:50:13.978193 | orchestrator 
| 2026-03-10 00:50:13.978198 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:50:13.978204 | orchestrator | 2026-03-10 00:50:13.978208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:50:13.978213 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.307) 0:00:00.307 ********* 2026-03-10 00:50:13.978217 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:50:13.978222 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:50:13.978226 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:50:13.978229 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:50:13.978233 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:50:13.978237 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:50:13.978241 | orchestrator | 2026-03-10 00:50:13.978245 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:50:13.978249 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.991) 0:00:01.298 ********* 2026-03-10 00:50:13.978253 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:50:13.978257 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:50:13.978261 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:50:13.978265 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:50:13.978269 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:50:13.978273 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:50:13.978277 | orchestrator | 2026-03-10 00:50:13.978281 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-10 
00:50:13.978285 | orchestrator | 2026-03-10 00:50:13.978289 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-10 00:50:13.978293 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.794) 0:00:02.093 ********* 2026-03-10 00:50:13.978301 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:50:13.978306 | orchestrator | 2026-03-10 00:50:13.978310 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-10 00:50:13.978314 | orchestrator | Tuesday 10 March 2026 00:49:05 +0000 (0:00:01.664) 0:00:03.757 ********* 2026-03-10 00:50:13.978318 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-10 00:50:13.978322 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-10 00:50:13.978326 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-10 00:50:13.978330 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-10 00:50:13.978334 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-10 00:50:13.978338 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-10 00:50:13.978342 | orchestrator | 2026-03-10 00:50:13.978346 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-10 00:50:13.978350 | orchestrator | Tuesday 10 March 2026 00:49:07 +0000 (0:00:01.558) 0:00:05.315 ********* 2026-03-10 00:50:13.978354 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-10 00:50:13.978358 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-10 00:50:13.978370 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-10 00:50:13.978374 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-10 00:50:13.978378 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-10 00:50:13.978382 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-10 00:50:13.978386 | orchestrator |
2026-03-10 00:50:13.978390 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-10 00:50:13.978394 | orchestrator | Tuesday 10 March 2026 00:49:09 +0000 (0:00:02.238) 0:00:07.554 *********
2026-03-10 00:50:13.978398 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-10 00:50:13.978402 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:50:13.978406 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-10 00:50:13.978410 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:50:13.978414 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-10 00:50:13.978418 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:50:13.978422 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-10 00:50:13.978426 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:50:13.978430 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-10 00:50:13.978434 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:50:13.978438 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-10 00:50:13.978442 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:50:13.978446 | orchestrator |
2026-03-10 00:50:13.978450 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-10 00:50:13.978454 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:01.669) 0:00:09.224 *********
2026-03-10 00:50:13.978458 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:50:13.978462 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:50:13.978466 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:50:13.978469 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:50:13.978475 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:50:13.978481 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:50:13.978488 | orchestrator |
2026-03-10 00:50:13.978492 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-10 00:50:13.978496 | orchestrator | Tuesday 10 March 2026 00:49:12 +0000 (0:00:01.383) 0:00:10.607 *********
2026-03-10 00:50:13.978509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
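Each `(item={...})` above is a kolla-ansible service definition: container name, image, bind mounts, and a Docker healthcheck. As a rough illustration only (kolla-ansible itself drives the Docker API through its own `kolla_container` module, not a shell command), such a dict could be rendered into an equivalent `docker run` invocation:

```python
# Illustrative sketch: map a kolla-ansible service dict (as seen in the log
# items above) onto docker run options. NOT kolla-ansible's implementation.
import shlex

def render_docker_run(service: dict) -> str:
    args = ["docker", "run", "-d", "--name", service["container_name"]]
    if service.get("privileged"):
        args.append("--privileged")
    for volume in service.get("volumes", []):
        args += ["-v", volume]
    hc = service.get("healthcheck")
    if hc:
        # kolla stores interval/timeout as plain second counts, hence the "s"
        args += ["--health-interval", f"{hc['interval']}s",
                 "--health-retries", str(hc["retries"]),
                 "--health-start-period", f"{hc['start_period']}s",
                 "--health-timeout", f"{hc['timeout']}s",
                 "--health-cmd", hc["test"][1]]  # drop the CMD-SHELL marker
    args.append(service["image"])
    return " ".join(shlex.quote(a) for a in args)

svc = {
    "container_name": "openvswitch_vswitchd",
    "image": "registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130",
    "privileged": True,
    "volumes": ["/run/openvswitch:/run/openvswitch:shared", "kolla_logs:/var/log/kolla/"],
    "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                    "test": ["CMD-SHELL", "ovs-appctl version"], "timeout": "30"},
}
print(render_docker_run(svc))
```

The vswitchd container runs `--privileged` with `/lib/modules` and the shared `/run/openvswitch` mount so it can talk to the kernel datapath and to ovsdb-server through the shared socket directory.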
2026-03-10 00:50:13.978567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978578 | orchestrator |
2026-03-10 00:50:13.978582 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-10 00:50:13.978587 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:02.401) 0:00:13.009 *********
2026-03-10 00:50:13.978591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978699 | orchestrator |
2026-03-10 00:50:13.978706 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-10 00:50:13.978712 | orchestrator | Tuesday 10 March 2026 00:49:19 +0000 (0:00:04.212) 0:00:17.221 *********
2026-03-10 00:50:13.978719 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:50:13.978726 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:50:13.978732 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:50:13.978739 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:50:13.978745 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:50:13.978752 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:50:13.978758 | orchestrator |
2026-03-10 00:50:13.978765 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-10 00:50:13.978772 | orchestrator | Tuesday 10 March 2026 00:49:21 +0000 (0:00:02.099) 0:00:19.321 *********
2026-03-10 00:50:13.978782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-10 00:50:13.978847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-10 00:50:13.978934 | orchestrator |
2026-03-10 00:50:13.978941 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-10 00:50:13.978947 | orchestrator | Tuesday 10 March 2026 00:49:25 +0000 (0:00:04.199) 0:00:23.520 *********
2026-03-10 00:50:13.978954 | orchestrator |
2026-03-10 00:50:13.978961 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-10 00:50:13.978968 | orchestrator | Tuesday 10 March 2026 00:49:25 +0000 (0:00:00.384) 0:00:23.904 *********
2026-03-10 00:50:13.978974 | orchestrator |
2026-03-10 00:50:13.978980 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-10 00:50:13.978984 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.394) 0:00:24.299 *********
2026-03-10 00:50:13.978988 | orchestrator |
2026-03-10 00:50:13.978992 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
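The repeated "Flush Handlers" entries and the RUNNING HANDLER blocks that follow them are Ansible's notify/handler mechanism: the config tasks above report `changed` and notify the restart handlers, and each notified handler runs at most once per host when handlers are flushed, so a container restarts once rather than once per changed file. A minimal simulation of that once-per-host de-duplication (illustrative only, not Ansible's implementation):

```python
# Sketch of Ansible handler de-duplication: a handler notified several times
# by changed tasks still runs only once per host at flush time.
def flush_handlers(notifications):
    """notifications: list of (host, handler_name) tuples emitted by changed tasks."""
    seen, runs = set(), []
    for host, handler in notifications:
        if (host, handler) not in seen:  # each handler fires once per host
            seen.add((host, handler))
            runs.append((host, handler))
    return runs

notes = [
    ("testbed-node-0", "Restart openvswitch-db-server container"),
    ("testbed-node-0", "Restart openvswitch-db-server container"),  # notified twice
    ("testbed-node-1", "Restart openvswitch-db-server container"),
]
print(flush_handlers(notes))
```

This is why the log shows exactly one `changed:` line per node under each RUNNING HANDLER, even though several tasks per node reported changes.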
2026-03-10 00:50:13.978996 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.219) 0:00:24.518 *********
2026-03-10 00:50:13.978999 | orchestrator |
2026-03-10 00:50:13.979003 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-10 00:50:13.979007 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.278) 0:00:24.796 *********
2026-03-10 00:50:13.979011 | orchestrator |
2026-03-10 00:50:13.979015 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-10 00:50:13.979019 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.146) 0:00:24.943 *********
2026-03-10 00:50:13.979023 | orchestrator |
2026-03-10 00:50:13.979030 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-10 00:50:13.979036 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.143) 0:00:25.086 *********
2026-03-10 00:50:13.979043 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:50:13.979050 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:50:13.979056 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:50:13.979063 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:50:13.979069 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:50:13.979171 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:50:13.979182 | orchestrator |
2026-03-10 00:50:13.979186 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-10 00:50:13.979196 | orchestrator | Tuesday 10 March 2026 00:49:37 +0000 (0:00:10.610) 0:00:35.697 *********
2026-03-10 00:50:13.979200 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:50:13.979205 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:50:13.979209 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:50:13.979213 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:50:13.979221 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:50:13.979225 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:50:13.979229 | orchestrator |
2026-03-10 00:50:13.979233 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-10 00:50:13.979238 | orchestrator | Tuesday 10 March 2026 00:49:38 +0000 (0:00:01.423) 0:00:37.120 *********
2026-03-10 00:50:13.979244 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:50:13.979254 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:50:13.979262 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:50:13.979268 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:50:13.979275 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:50:13.979282 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:50:13.979288 | orchestrator |
2026-03-10 00:50:13.979294 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-10 00:50:13.979300 | orchestrator | Tuesday 10 March 2026 00:49:48 +0000 (0:00:09.924) 0:00:47.045 *********
2026-03-10 00:50:13.979312 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-10 00:50:13.979319 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-10 00:50:13.979326 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-10 00:50:13.979332 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-10 00:50:13.979338 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-10 00:50:13.979344 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-10 00:50:13.979350 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-10 00:50:13.979357 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-10 00:50:13.979363 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-10 00:50:13.979369 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-10 00:50:13.979375 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-10 00:50:13.979382 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-10 00:50:13.979392 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-10 00:50:13.979398 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-10 00:50:13.979405 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-10 00:50:13.979413 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-10 00:50:13.979421 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-10 00:50:13.979427 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-10 00:50:13.979438 | orchestrator |
2026-03-10 00:50:13.979444 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-10 00:50:13.979450 | orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:07.699) 0:00:54.744 *********
2026-03-10 00:50:13.979456 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-10 00:50:13.979463 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:50:13.979468 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-10 00:50:13.979475 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:50:13.979481 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-10 00:50:13.979487 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:50:13.979493 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-10 00:50:13.979499 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-10 00:50:13.979505 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-10 00:50:13.979511 | orchestrator |
2026-03-10 00:50:13.979518 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-10 00:50:13.979524 | orchestrator | Tuesday 10 March 2026 00:49:58 +0000 (0:00:02.397) 0:00:57.142 *********
2026-03-10 00:50:13.979531 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-10 00:50:13.979537 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:50:13.979543 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-10 00:50:13.979550 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:50:13.979556 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-10 00:50:13.979561 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:50:13.979568 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-10 00:50:13.979574 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-10 00:50:13.979580 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-10 00:50:13.979586 | orchestrator
| 2026-03-10 00:50:13.979592 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-10 00:50:13.979598 | orchestrator | Tuesday 10 March 2026 00:50:02 +0000 (0:00:03.761) 0:01:00.904 ********* 2026-03-10 00:50:13.979604 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:50:13.979610 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:50:13.979616 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:50:13.979622 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:50:13.979628 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:50:13.979635 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:50:13.979641 | orchestrator | 2026-03-10 00:50:13.979646 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:50:13.979653 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:50:13.979664 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:50:13.979671 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:50:13.979678 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:50:13.979685 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:50:13.979692 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:50:13.979699 | orchestrator | 2026-03-10 00:50:13.979706 | orchestrator | 2026-03-10 00:50:13.979713 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:50:13.979726 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:09.334) 0:01:10.239 ********* 2026-03-10 00:50:13.979733 | 
orchestrator | =============================================================================== 2026-03-10 00:50:13.979740 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.26s 2026-03-10 00:50:13.979747 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.61s 2026-03-10 00:50:13.979754 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.70s 2026-03-10 00:50:13.979760 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.21s 2026-03-10 00:50:13.979767 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.20s 2026-03-10 00:50:13.979777 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.76s 2026-03-10 00:50:13.979784 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.40s 2026-03-10 00:50:13.979790 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.40s 2026-03-10 00:50:13.979797 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.24s 2026-03-10 00:50:13.979803 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.10s 2026-03-10 00:50:13.979810 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.67s 2026-03-10 00:50:13.979817 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.66s 2026-03-10 00:50:13.979825 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.57s 2026-03-10 00:50:13.979832 | orchestrator | module-load : Load modules ---------------------------------------------- 1.56s 2026-03-10 00:50:13.979839 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.42s 2026-03-10 00:50:13.979845 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 1.38s 2026-03-10 00:50:13.979850 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2026-03-10 00:50:13.979855 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2026-03-10 00:50:13.979860 | orchestrator | 2026-03-10 00:50:13 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:50:13.979864 | orchestrator | 2026-03-10 00:50:13 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:50:13.979869 | orchestrator | 2026-03-10 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:17.013828 | orchestrator | 2026-03-10 00:50:17 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:50:17.016101 | orchestrator | 2026-03-10 00:50:17 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:50:17.018297 | orchestrator | 2026-03-10 00:50:17 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:50:17.020046 | orchestrator | 2026-03-10 00:50:17 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:50:17.021541 | orchestrator | 2026-03-10 00:50:17 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:50:17.021837 | orchestrator | 2026-03-10 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:20.057270 | orchestrator | 2026-03-10 00:50:20 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:50:20.058396 | orchestrator | 2026-03-10 00:50:20 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:50:20.059577 | orchestrator | 2026-03-10 00:50:20 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:50:20.060726 | orchestrator | 2026-03-10 00:50:20 | INFO  | Task 
55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:50:20.061788 | orchestrator | 2026-03-10 00:50:20 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state STARTED 2026-03-10 00:50:20.061937 | orchestrator | 2026-03-10 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:21.950312 | orchestrator | 2026-03-10 00:51:21 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:21.952257 | orchestrator | 2026-03-10 00:51:21 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:21.953711 | orchestrator | 2026-03-10 00:51:21 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:21.957648 | orchestrator | 2026-03-10 00:51:21 | INFO  | Task 
55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:21.959951 | orchestrator | 2026-03-10 00:51:21 | INFO  | Task 529d41b3-3ef2-4c8e-a186-fd48e2bc75f3 is in state SUCCESS 2026-03-10 00:51:21.962972 | orchestrator | 2026-03-10 00:51:21.963055 | orchestrator | 2026-03-10 00:51:21.963074 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-10 00:51:21.963091 | orchestrator | 2026-03-10 00:51:21.963106 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-10 00:51:21.963121 | orchestrator | Tuesday 10 March 2026 00:46:28 +0000 (0:00:00.223) 0:00:00.223 ********* 2026-03-10 00:51:21.963136 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:21.963153 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:21.963168 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:21.963184 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:21.963198 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:21.963213 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:21.963228 | orchestrator | 2026-03-10 00:51:21.963245 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-10 00:51:21.963260 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:00.683) 0:00:00.907 ********* 2026-03-10 00:51:21.963276 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:21.963291 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:21.963305 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:21.963320 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.963335 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:21.963349 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.963364 | orchestrator | 2026-03-10 00:51:21.963378 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-10 00:51:21.963393 | 
orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:00.701) 0:00:01.608 ********* 2026-03-10 00:51:21.963408 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:21.963421 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:21.963435 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:21.963450 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.963463 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:21.963477 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.963491 | orchestrator | 2026-03-10 00:51:21.963505 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-10 00:51:21.963520 | orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:00.827) 0:00:02.436 ********* 2026-03-10 00:51:21.963535 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:21.963550 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:21.963564 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:21.963578 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:21.963592 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:21.963607 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:21.963620 | orchestrator | 2026-03-10 00:51:21.963636 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-10 00:51:21.963651 | orchestrator | Tuesday 10 March 2026 00:46:33 +0000 (0:00:02.076) 0:00:04.513 ********* 2026-03-10 00:51:21.963692 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:21.963707 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:21.963723 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:21.963737 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:21.963751 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:21.963765 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:21.963779 | orchestrator | 2026-03-10 00:51:21.963794 | 
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-10 00:51:21.963808 | orchestrator | Tuesday 10 March 2026 00:46:35 +0000 (0:00:01.968) 0:00:06.481 ********* 2026-03-10 00:51:21.963822 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:21.963836 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:21.963851 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:21.963865 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:21.963879 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:21.963893 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:21.963907 | orchestrator | 2026-03-10 00:51:21.963922 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-10 00:51:21.963936 | orchestrator | Tuesday 10 March 2026 00:46:35 +0000 (0:00:00.985) 0:00:07.467 ********* 2026-03-10 00:51:21.963951 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:21.963965 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:21.963980 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:21.963995 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.964010 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:21.964063 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.964078 | orchestrator | 2026-03-10 00:51:21.964092 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-10 00:51:21.964107 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:01.119) 0:00:08.586 ********* 2026-03-10 00:51:21.964121 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:21.964136 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:21.964150 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:21.964165 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.964178 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
00:51:21.964193 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.964209 | orchestrator |
2026-03-10 00:51:21.964224 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-10 00:51:21.964238 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.678) 0:00:09.265 *********
2026-03-10 00:51:21.964907 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-10 00:51:21.964943 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-10 00:51:21.964957 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.964970 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-10 00:51:21.964982 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-10 00:51:21.964995 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.965007 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-10 00:51:21.965081 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-10 00:51:21.965097 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.965109 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-10 00:51:21.965141 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-10 00:51:21.965155 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.965167 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-10 00:51:21.965179 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-10 00:51:21.965190 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.965220 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-10 00:51:21.965234 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-10 00:51:21.965247 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.965259 | orchestrator |
2026-03-10 00:51:21.965271 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-10 00:51:21.965284 | orchestrator | Tuesday 10 March 2026 00:46:38 +0000 (0:00:00.739) 0:00:10.004 *********
2026-03-10 00:51:21.965295 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.965308 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.965320 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.965332 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.965345 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.965357 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.965369 | orchestrator |
2026-03-10 00:51:21.965382 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-10 00:51:21.965397 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:01.908) 0:00:11.912 *********
2026-03-10 00:51:21.965409 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:51:21.965423 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:51:21.965431 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:51:21.965439 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.965447 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.965454 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.965462 | orchestrator |
2026-03-10 00:51:21.965470 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-10 00:51:21.965478 | orchestrator | Tuesday 10 March 2026 00:46:41 +0000 (0:00:00.813) 0:00:12.726 *********
2026-03-10 00:51:21.965486 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.965494 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.965501 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:21.965509 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.965517 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:21.965524 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:21.965532 | orchestrator |
2026-03-10 00:51:21.965545 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-10 00:51:21.965557 | orchestrator | Tuesday 10 March 2026 00:46:47 +0000 (0:00:06.184) 0:00:18.910 *********
2026-03-10 00:51:21.965568 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.965579 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.965596 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.965614 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.965627 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.965639 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.965652 | orchestrator |
2026-03-10 00:51:21.965664 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-10 00:51:21.965678 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:02.047) 0:00:20.957 *********
2026-03-10 00:51:21.965690 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.965704 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.965717 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.965731 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.965746 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.965758 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.965766 | orchestrator |
2026-03-10 00:51:21.965775 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-10 00:51:21.965784 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:02.862) 0:00:23.820 *********
2026-03-10 00:51:21.965792 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.965800 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.965808 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.965816 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.965833 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.965841 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.965849 | orchestrator |
2026-03-10 00:51:21.965857 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-10 00:51:21.965864 | orchestrator | Tuesday 10 March 2026 00:46:53 +0000 (0:00:01.333) 0:00:25.153 *********
2026-03-10 00:51:21.965872 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-10 00:51:21.965880 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-10 00:51:21.965888 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.965895 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-10 00:51:21.965903 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-10 00:51:21.965918 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.965926 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-10 00:51:21.965934 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-10 00:51:21.965941 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.965949 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-10 00:51:21.965957 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-10 00:51:21.965965 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-10 00:51:21.965973 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-10 00:51:21.965980 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.965988 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.965996 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-10 00:51:21.966004 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-10 00:51:21.966086 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966105 | orchestrator |
2026-03-10 00:51:21.966115 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-10 00:51:21.966135 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:03.560) 0:00:28.714 *********
2026-03-10 00:51:21.966143 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.966151 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.966159 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.966167 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.966175 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966183 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966191 | orchestrator |
2026-03-10 00:51:21.966199 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-10 00:51:21.966207 | orchestrator | Tuesday 10 March 2026 00:46:58 +0000 (0:00:01.338) 0:00:30.053 *********
2026-03-10 00:51:21.966215 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.966223 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.966230 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.966238 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.966246 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966254 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966261 | orchestrator |
2026-03-10 00:51:21.966269 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-10 00:51:21.966277 | orchestrator |
2026-03-10 00:51:21.966285 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-10 00:51:21.966293 | orchestrator | Tuesday 10 March 2026 00:47:00 +0000 (0:00:01.699) 0:00:31.752 *********
2026-03-10 00:51:21.966301 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.966309 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.966317 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.966325 | orchestrator |
2026-03-10 00:51:21.966333 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-10 00:51:21.966341 | orchestrator | Tuesday 10 March 2026 00:47:02 +0000 (0:00:02.086) 0:00:33.839 *********
2026-03-10 00:51:21.966348 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.966363 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.966371 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.966379 | orchestrator |
2026-03-10 00:51:21.966387 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-10 00:51:21.966395 | orchestrator | Tuesday 10 March 2026 00:47:04 +0000 (0:00:01.799) 0:00:35.639 *********
2026-03-10 00:51:21.966402 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.966410 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.966418 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.966426 | orchestrator |
2026-03-10 00:51:21.966434 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-10 00:51:21.966442 | orchestrator | Tuesday 10 March 2026 00:47:05 +0000 (0:00:01.373) 0:00:37.012 *********
2026-03-10 00:51:21.966450 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.966458 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.966465 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.966473 | orchestrator |
2026-03-10 00:51:21.966481 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-10 00:51:21.966489 | orchestrator | Tuesday 10 March 2026 00:47:07 +0000 (0:00:01.578) 0:00:38.590 *********
2026-03-10 00:51:21.966497 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.966505 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966513 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966521 | orchestrator |
2026-03-10 00:51:21.966528 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-10 00:51:21.966536 | orchestrator | Tuesday 10 March 2026 00:47:08 +0000 (0:00:01.125) 0:00:39.716 *********
2026-03-10 00:51:21.966544 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.966552 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.966560 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.966568 | orchestrator |
2026-03-10 00:51:21.966576 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-10 00:51:21.966583 | orchestrator | Tuesday 10 March 2026 00:47:09 +0000 (0:00:00.988) 0:00:40.704 *********
2026-03-10 00:51:21.966591 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.966599 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.966607 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.966615 | orchestrator |
2026-03-10 00:51:21.966623 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-10 00:51:21.966631 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:01.814) 0:00:42.519 *********
2026-03-10 00:51:21.966639 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:51:21.966647 | orchestrator |
2026-03-10 00:51:21.966655 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-10 00:51:21.966663 | orchestrator | Tuesday 10 March 2026 00:47:12 +0000 (0:00:01.039) 0:00:43.558 *********
2026-03-10 00:51:21.966670 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.966678 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.966686 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.966694 | orchestrator |
2026-03-10 00:51:21.966702 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-10 00:51:21.966714 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:02.609) 0:00:46.168 *********
2026-03-10 00:51:21.966722 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966730 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966738 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.966746 | orchestrator |
2026-03-10 00:51:21.966753 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-10 00:51:21.966761 | orchestrator | Tuesday 10 March 2026 00:47:15 +0000 (0:00:01.288) 0:00:47.456 *********
2026-03-10 00:51:21.966769 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966777 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966785 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.966793 | orchestrator |
2026-03-10 00:51:21.966801 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-10 00:51:21.966813 | orchestrator | Tuesday 10 March 2026 00:47:17 +0000 (0:00:01.518) 0:00:48.974 *********
2026-03-10 00:51:21.966821 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966829 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966837 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.966845 | orchestrator |
2026-03-10 00:51:21.966853 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-10 00:51:21.966866 | orchestrator | Tuesday 10 March 2026 00:47:19 +0000 (0:00:02.339) 0:00:51.314 *********
2026-03-10 00:51:21.966875 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966883 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.966890 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966898 | orchestrator |
2026-03-10 00:51:21.966906 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-10 00:51:21.966914 | orchestrator | Tuesday 10 March 2026 00:47:20 +0000 (0:00:00.984) 0:00:52.298 *********
2026-03-10 00:51:21.966921 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.966929 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.966937 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.966945 | orchestrator |
2026-03-10 00:51:21.966953 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-10 00:51:21.966961 | orchestrator | Tuesday 10 March 2026 00:47:21 +0000 (0:00:00.793) 0:00:53.092 *********
2026-03-10 00:51:21.966968 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.966976 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.966984 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.966992 | orchestrator |
2026-03-10 00:51:21.967000 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-10 00:51:21.967008 | orchestrator | Tuesday 10 March 2026 00:47:25 +0000 (0:00:03.423) 0:00:56.516 *********
2026-03-10 00:51:21.967015 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967074 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967082 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967090 | orchestrator |
2026-03-10 00:51:21.967098 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-10 00:51:21.967106 | orchestrator | Tuesday 10 March 2026 00:47:27 +0000 (0:00:02.793) 0:00:59.309 *********
2026-03-10 00:51:21.967114 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967121 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967129 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967137 | orchestrator |
2026-03-10 00:51:21.967145 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-10 00:51:21.967153 | orchestrator | Tuesday 10 March 2026 00:47:28 +0000 (0:00:00.955) 0:01:00.265 *********
2026-03-10 00:51:21.967161 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-10 00:51:21.967169 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-10 00:51:21.967177 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-10 00:51:21.967186 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-10 00:51:21.967194 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-10 00:51:21.967201 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-10 00:51:21.967209 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-10 00:51:21.967223 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-10 00:51:21.967231 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-10 00:51:21.967239 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-10 00:51:21.967247 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-10 00:51:21.967255 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-10 00:51:21.967263 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967270 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967278 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967286 | orchestrator |
2026-03-10 00:51:21.967298 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-10 00:51:21.967306 | orchestrator | Tuesday 10 March 2026 00:48:12 +0000 (0:00:43.873) 0:01:44.138 *********
2026-03-10 00:51:21.967314 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.967322 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.967330 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.967338 | orchestrator |
2026-03-10 00:51:21.967346 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-10 00:51:21.967354 | orchestrator | Tuesday 10 March 2026 00:48:12 +0000 (0:00:00.311) 0:01:44.450 *********
2026-03-10 00:51:21.967361 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967369 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967377 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967385 | orchestrator |
2026-03-10 00:51:21.967393 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-10 00:51:21.967400 | orchestrator | Tuesday 10 March 2026 00:48:14 +0000 (0:00:01.204) 0:01:45.655 *********
2026-03-10 00:51:21.967408 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967416 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967424 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967432 | orchestrator |
2026-03-10 00:51:21.967445 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-10 00:51:21.967453 | orchestrator | Tuesday 10 March 2026 00:48:16 +0000 (0:00:02.017) 0:01:47.673 *********
2026-03-10 00:51:21.967461 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967469 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967476 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967484 | orchestrator |
2026-03-10 00:51:21.967492 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-10 00:51:21.967500 | orchestrator | Tuesday 10 March 2026 00:48:38 +0000 (0:00:22.710) 0:02:10.384 *********
2026-03-10 00:51:21.967508 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967516 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967524 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967531 | orchestrator |
2026-03-10 00:51:21.967539 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-10 00:51:21.967547 | orchestrator | Tuesday 10 March 2026 00:48:39 +0000 (0:00:00.631) 0:02:11.015 *********
2026-03-10 00:51:21.967555 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967563 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967571 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967578 | orchestrator |
2026-03-10 00:51:21.967586 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-10 00:51:21.967594 | orchestrator | Tuesday 10 March 2026 00:48:40 +0000 (0:00:00.572) 0:02:11.587 *********
2026-03-10 00:51:21.967602 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967610 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967623 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967631 | orchestrator |
2026-03-10 00:51:21.967639 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-10 00:51:21.967647 | orchestrator | Tuesday 10 March 2026 00:48:40 +0000 (0:00:00.579) 0:02:12.167 *********
2026-03-10 00:51:21.967655 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967662 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967670 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967678 | orchestrator |
2026-03-10 00:51:21.967686 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-10 00:51:21.967694 | orchestrator | Tuesday 10 March 2026 00:48:41 +0000 (0:00:01.050) 0:02:13.217 *********
2026-03-10 00:51:21.967701 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.967709 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.967717 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.967725 | orchestrator |
2026-03-10 00:51:21.967733 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-10 00:51:21.967740 | orchestrator | Tuesday 10 March 2026 00:48:42 +0000 (0:00:00.324) 0:02:13.542 *********
2026-03-10 00:51:21.967748 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967756 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967764 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967772 | orchestrator |
2026-03-10 00:51:21.967780 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-10 00:51:21.967788 | orchestrator | Tuesday 10 March 2026 00:48:42 +0000 (0:00:00.683) 0:02:14.225 *********
2026-03-10 00:51:21.967795 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967803 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967811 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967819 | orchestrator |
2026-03-10 00:51:21.967827 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-10 00:51:21.967835 | orchestrator | Tuesday 10 March 2026 00:48:43 +0000 (0:00:00.642) 0:02:14.867 *********
2026-03-10 00:51:21.967843 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967851 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967859 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967867 | orchestrator |
2026-03-10 00:51:21.967874 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-10 00:51:21.967882 | orchestrator | Tuesday 10 March 2026 00:48:44 +0000 (0:00:01.355) 0:02:16.223 *********
2026-03-10 00:51:21.967890 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:21.967898 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:21.967906 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:21.967914 | orchestrator |
2026-03-10 00:51:21.967921 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-10 00:51:21.967929 | orchestrator | Tuesday 10 March 2026 00:48:45 +0000 (0:00:00.867) 0:02:17.091 *********
2026-03-10 00:51:21.967937 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.967945 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.967953 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.967961 | orchestrator |
2026-03-10 00:51:21.967969 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-10 00:51:21.967976 | orchestrator | Tuesday 10 March 2026 00:48:46 +0000 (0:00:00.426) 0:02:17.517 *********
2026-03-10 00:51:21.967984 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:51:21.967992 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:51:21.968000 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:51:21.968008 | orchestrator |
2026-03-10 00:51:21.968016 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-10 00:51:21.968048 | orchestrator | Tuesday 10 March 2026 00:48:46 +0000 (0:00:00.405) 0:02:17.923 *********
2026-03-10 00:51:21.968056 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.968064 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.968072 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.968080 | orchestrator |
2026-03-10 00:51:21.968093 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-10 00:51:21.968101 | orchestrator | Tuesday 10 March 2026 00:48:47 +0000 (0:00:01.222) 0:02:19.146 *********
2026-03-10 00:51:21.968109 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:21.968117 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:21.968124 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:21.968132 | orchestrator |
2026-03-10 00:51:21.968140 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-10 00:51:21.968148 | orchestrator | Tuesday 10 March 2026 00:48:48 +0000 (0:00:00.739) 0:02:19.885 *********
2026-03-10 00:51:21.968156 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-10 00:51:21.968170 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-10 00:51:21.968178 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-10 00:51:21.968186 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-10 00:51:21.968193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-10 00:51:21.968201 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-10 00:51:21.968209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-10 00:51:21.968217 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-10 00:51:21.968225 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-10 00:51:21.968233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-10 00:51:21.968241 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-10 00:51:21.968249 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-10 00:51:21.968256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-10 00:51:21.968264 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-10 00:51:21.968272 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-10 00:51:21.968280 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-10 00:51:21.968288 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-10 00:51:21.968296 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-10 00:51:21.968304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-10 00:51:21.968312 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-10 00:51:21.968320 | orchestrator |
2026-03-10 00:51:21.968328 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-10 00:51:21.968335 | orchestrator |
2026-03-10 00:51:21.968343 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-10 00:51:21.968351 | orchestrator | Tuesday 10 March 2026 00:48:51 +0000 (0:00:03.328) 0:02:23.214 *********
2026-03-10 00:51:21.968359 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:51:21.968367 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:51:21.968375 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:51:21.968383 | orchestrator |
2026-03-10 00:51:21.968391 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-10 00:51:21.968398 | orchestrator | Tuesday 10 March 2026 00:48:52 +0000 (0:00:00.576) 0:02:23.790 *********
2026-03-10 00:51:21.968411 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:51:21.968419 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:51:21.968427 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:51:21.968434 | orchestrator |
2026-03-10 00:51:21.968442 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-10 00:51:21.968450 | orchestrator | Tuesday 10 March 2026 00:48:52 +0000 (0:00:00.647) 0:02:24.438 *********
2026-03-10 00:51:21.968458 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:51:21.968466 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:51:21.968474 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:51:21.968482 | orchestrator |
2026-03-10 00:51:21.968490 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-10 00:51:21.968497 | orchestrator | Tuesday 10 March 2026 00:48:53 +0000 (0:00:00.356) 0:02:24.794 *********
2026-03-10 00:51:21.968505 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:51:21.968513 | orchestrator |
2026-03-10 00:51:21.968521 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-10 00:51:21.968529 | orchestrator | Tuesday 10 March 2026 00:48:54 +0000 (0:00:00.756) 0:02:25.551 *********
2026-03-10 00:51:21.968537 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.968545 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.968553 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.968560 | orchestrator |
2026-03-10 00:51:21.968572 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-10 00:51:21.968580 | orchestrator | Tuesday 10 March 2026 00:48:54 +0000 (0:00:00.340) 0:02:25.892 *********
2026-03-10 00:51:21.968588 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.968596 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.968604 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.968612 | orchestrator |
2026-03-10 00:51:21.968620 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-10 00:51:21.968627 | orchestrator | Tuesday 10 March 2026 00:48:54 +0000 (0:00:00.346) 0:02:26.239 *********
2026-03-10 00:51:21.968635 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:51:21.968643 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:51:21.968651 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:51:21.968659 | orchestrator |
2026-03-10 00:51:21.968667 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-10 00:51:21.968675 | orchestrator | Tuesday 10 March 2026 00:48:55 +0000 (0:00:00.338) 0:02:26.577 *********
2026-03-10 00:51:21.968683 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:21.968691 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:21.968699 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:21.968707 | orchestrator |
2026-03-10 00:51:21.968719 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-10 00:51:21.968728 | orchestrator | Tuesday 10 March 2026 00:48:56 +0000 (0:00:01.026) 0:02:27.604 *********
2026-03-10 00:51:21.968735 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:21.968743 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:21.968751 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:21.968759 | orchestrator |
2026-03-10 00:51:21.968767 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-10 00:51:21.968775 | orchestrator | Tuesday 10 March 2026 00:48:57 +0000 (0:00:01.134) 0:02:28.738 *********
2026-03-10 00:51:21.968783 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:21.968790 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:21.968798 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:21.968806 | orchestrator |
2026-03-10 00:51:21.968814 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-10 00:51:21.968822 | orchestrator | Tuesday 10 March 2026 00:48:58 +0000 (0:00:01.443) 0:02:30.182 *********
2026-03-10 00:51:21.968830 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:21.968838 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:21.968850 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:21.968858 | orchestrator |
2026-03-10 00:51:21.968866 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-10 00:51:21.968874 | orchestrator |
2026-03-10 00:51:21.968881 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-10 00:51:21.968889 | orchestrator | Tuesday 10 March 2026 00:49:09 +0000 (0:00:11.011) 0:02:41.193 *********
2026-03-10 00:51:21.968897 | orchestrator | ok: [testbed-manager]
2026-03-10 00:51:21.968905 | orchestrator |
2026-03-10 00:51:21.968913 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-10 00:51:21.968920 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:00.961) 0:02:42.155 *********
2026-03-10 00:51:21.968928 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:21.968936 | orchestrator |
2026-03-10 00:51:21.968944 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-10 00:51:21.968952 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:00.491) 0:02:42.647 *********
2026-03-10 00:51:21.968960 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-10 00:51:21.968968 | orchestrator |
2026-03-10 00:51:21.968976 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-10 00:51:21.968984 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:00.728) 0:02:43.375 *********
2026-03-10 00:51:21.968991 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:21.968999 | orchestrator |
2026-03-10 00:51:21.969007 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-10 00:51:21.969015 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:01.117) 0:02:44.492 *********
2026-03-10 00:51:21.969048 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:21.969056 | orchestrator |
2026-03-10 00:51:21.969064 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-10 00:51:21.969071 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:00.684) 0:02:45.177 *********
2026-03-10 00:51:21.969079 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-10 00:51:21.969087 | orchestrator |
2026-03-10 00:51:21.969095 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-10 00:51:21.969103 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:01.617) 0:02:46.794 *********
2026-03-10 00:51:21.969111 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-10 00:51:21.969119 | orchestrator |
2026-03-10 00:51:21.969126 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-10 00:51:21.969134 | orchestrator | Tuesday 10 March 2026 00:49:16 +0000 (0:00:00.640) 0:02:47.603 *********
2026-03-10 00:51:21.969142 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:21.969150 | orchestrator |
2026-03-10 00:51:21.969158 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-10 00:51:21.969166 | orchestrator | Tuesday 10 March 2026 00:49:16 +0000 (0:00:00.640) 0:02:48.244 *********
2026-03-10 00:51:21.969173 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:21.969181 | orchestrator |
2026-03-10 00:51:21.969189 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-10 00:51:21.969197 | orchestrator |
2026-03-10 00:51:21.969205 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-10 00:51:21.969213 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:00.549) 0:02:48.794 *********
2026-03-10 00:51:21.969220 | orchestrator | ok: [testbed-manager]
2026-03-10 00:51:21.969228 | orchestrator | 2026-03-10 00:51:21.969236 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-10 00:51:21.969244 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:00.132) 0:02:48.927 ********* 2026-03-10 00:51:21.969252 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:51:21.969260 | orchestrator | 2026-03-10 00:51:21.969275 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-10 00:51:21.969283 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:00.279) 0:02:49.207 ********* 2026-03-10 00:51:21.969296 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:21.969304 | orchestrator | 2026-03-10 00:51:21.969312 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-10 00:51:21.969320 | orchestrator | Tuesday 10 March 2026 00:49:18 +0000 (0:00:00.980) 0:02:50.187 ********* 2026-03-10 00:51:21.969328 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:21.969336 | orchestrator | 2026-03-10 00:51:21.969343 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-10 00:51:21.969351 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:01.517) 0:02:51.704 ********* 2026-03-10 00:51:21.969359 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:21.969367 | orchestrator | 2026-03-10 00:51:21.969375 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-10 00:51:21.969382 | orchestrator | Tuesday 10 March 2026 00:49:21 +0000 (0:00:01.115) 0:02:52.820 ********* 2026-03-10 00:51:21.969390 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:21.969398 | orchestrator | 2026-03-10 00:51:21.969410 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 
2026-03-10 00:51:21.969418 | orchestrator | Tuesday 10 March 2026 00:49:21 +0000 (0:00:00.410) 0:02:53.231 ********* 2026-03-10 00:51:21.969426 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:21.969434 | orchestrator | 2026-03-10 00:51:21.969442 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-10 00:51:21.969450 | orchestrator | Tuesday 10 March 2026 00:49:31 +0000 (0:00:10.195) 0:03:03.426 ********* 2026-03-10 00:51:21.969458 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:21.969465 | orchestrator | 2026-03-10 00:51:21.969473 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-10 00:51:21.969481 | orchestrator | Tuesday 10 March 2026 00:49:50 +0000 (0:00:18.414) 0:03:21.841 ********* 2026-03-10 00:51:21.969489 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:21.969497 | orchestrator | 2026-03-10 00:51:21.969505 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-10 00:51:21.969512 | orchestrator | 2026-03-10 00:51:21.969520 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-10 00:51:21.969535 | orchestrator | Tuesday 10 March 2026 00:49:51 +0000 (0:00:00.679) 0:03:22.521 ********* 2026-03-10 00:51:21.969550 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:21.969563 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:21.969577 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:21.969590 | orchestrator | 2026-03-10 00:51:21.969604 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-10 00:51:21.969617 | orchestrator | Tuesday 10 March 2026 00:49:51 +0000 (0:00:00.577) 0:03:23.099 ********* 2026-03-10 00:51:21.969632 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.969646 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 00:51:21.969661 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.969675 | orchestrator | 2026-03-10 00:51:21.969687 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-10 00:51:21.969695 | orchestrator | Tuesday 10 March 2026 00:49:52 +0000 (0:00:00.375) 0:03:23.474 ********* 2026-03-10 00:51:21.969703 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:51:21.969711 | orchestrator | 2026-03-10 00:51:21.969719 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-10 00:51:21.969726 | orchestrator | Tuesday 10 March 2026 00:49:52 +0000 (0:00:00.723) 0:03:24.198 ********* 2026-03-10 00:51:21.969734 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.969742 | orchestrator | 2026-03-10 00:51:21.969750 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-10 00:51:21.969758 | orchestrator | Tuesday 10 March 2026 00:49:53 +0000 (0:00:00.848) 0:03:25.046 ********* 2026-03-10 00:51:21.969765 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.969781 | orchestrator | 2026-03-10 00:51:21.969789 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-10 00:51:21.969796 | orchestrator | Tuesday 10 March 2026 00:49:54 +0000 (0:00:00.927) 0:03:25.974 ********* 2026-03-10 00:51:21.969804 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.969812 | orchestrator | 2026-03-10 00:51:21.969820 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-10 00:51:21.969827 | orchestrator | Tuesday 10 March 2026 00:49:54 +0000 (0:00:00.138) 0:03:26.113 ********* 2026-03-10 00:51:21.969835 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.969843 | 
orchestrator | 2026-03-10 00:51:21.969851 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-10 00:51:21.969858 | orchestrator | Tuesday 10 March 2026 00:49:55 +0000 (0:00:01.176) 0:03:27.289 ********* 2026-03-10 00:51:21.969866 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.969874 | orchestrator | 2026-03-10 00:51:21.969882 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-10 00:51:21.969889 | orchestrator | Tuesday 10 March 2026 00:49:55 +0000 (0:00:00.138) 0:03:27.427 ********* 2026-03-10 00:51:21.969897 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.969905 | orchestrator | 2026-03-10 00:51:21.969913 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-10 00:51:21.969920 | orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:00.144) 0:03:27.572 ********* 2026-03-10 00:51:21.969928 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.969936 | orchestrator | 2026-03-10 00:51:21.969943 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-10 00:51:21.969951 | orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:00.143) 0:03:27.715 ********* 2026-03-10 00:51:21.969959 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.969967 | orchestrator | 2026-03-10 00:51:21.969974 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-10 00:51:21.969982 | orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:00.123) 0:03:27.838 ********* 2026-03-10 00:51:21.969995 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.970003 | orchestrator | 2026-03-10 00:51:21.970011 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-10 00:51:21.970089 | orchestrator | Tuesday 10 March 
2026 00:50:02 +0000 (0:00:05.793) 0:03:33.632 ********* 2026-03-10 00:51:21.970099 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-10 00:51:21.970107 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-03-10 00:51:21.970115 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-10 00:51:21.970123 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-10 00:51:21.970130 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-10 00:51:21.970138 | orchestrator | 2026-03-10 00:51:21.970146 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-10 00:51:21.970154 | orchestrator | Tuesday 10 March 2026 00:50:45 +0000 (0:00:43.724) 0:04:17.356 ********* 2026-03-10 00:51:21.970169 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.970177 | orchestrator | 2026-03-10 00:51:21.970185 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-10 00:51:21.970193 | orchestrator | Tuesday 10 March 2026 00:50:47 +0000 (0:00:01.410) 0:04:18.766 ********* 2026-03-10 00:51:21.970200 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.970208 | orchestrator | 2026-03-10 00:51:21.970216 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-10 00:51:21.970224 | orchestrator | Tuesday 10 March 2026 00:50:49 +0000 (0:00:01.767) 0:04:20.534 ********* 2026-03-10 00:51:21.970232 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:51:21.970240 | orchestrator | 2026-03-10 00:51:21.970254 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-10 00:51:21.970262 | orchestrator | Tuesday 10 March 2026 00:50:50 +0000 
(0:00:01.216) 0:04:21.750 ********* 2026-03-10 00:51:21.970270 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.970278 | orchestrator | 2026-03-10 00:51:21.970285 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-10 00:51:21.970293 | orchestrator | Tuesday 10 March 2026 00:50:50 +0000 (0:00:00.352) 0:04:22.102 ********* 2026-03-10 00:51:21.970301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-10 00:51:21.970309 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-10 00:51:21.970317 | orchestrator | 2026-03-10 00:51:21.970324 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-10 00:51:21.970332 | orchestrator | Tuesday 10 March 2026 00:50:52 +0000 (0:00:02.065) 0:04:24.168 ********* 2026-03-10 00:51:21.970340 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.970348 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:21.970355 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.970363 | orchestrator | 2026-03-10 00:51:21.970371 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-10 00:51:21.970379 | orchestrator | Tuesday 10 March 2026 00:50:53 +0000 (0:00:00.455) 0:04:24.624 ********* 2026-03-10 00:51:21.970386 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:21.970394 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:21.970402 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:21.970410 | orchestrator | 2026-03-10 00:51:21.970418 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-10 00:51:21.970425 | orchestrator | 2026-03-10 00:51:21.970433 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-10 
00:51:21.970441 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.920) 0:04:25.544 ********* 2026-03-10 00:51:21.970449 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:21.970457 | orchestrator | 2026-03-10 00:51:21.970465 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-10 00:51:21.970472 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.161) 0:04:25.705 ********* 2026-03-10 00:51:21.970480 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:51:21.970488 | orchestrator | 2026-03-10 00:51:21.970496 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-10 00:51:21.970504 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.273) 0:04:25.979 ********* 2026-03-10 00:51:21.970511 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:21.970519 | orchestrator | 2026-03-10 00:51:21.970527 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-10 00:51:21.970535 | orchestrator | 2026-03-10 00:51:21.970542 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-10 00:51:21.970550 | orchestrator | Tuesday 10 March 2026 00:51:00 +0000 (0:00:06.120) 0:04:32.100 ********* 2026-03-10 00:51:21.970558 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:21.970566 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:21.970573 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:21.970581 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:21.970589 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:21.970595 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:21.970602 | orchestrator | 2026-03-10 00:51:21.970608 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-10 00:51:21.970615 | orchestrator | 
Tuesday 10 March 2026 00:51:01 +0000 (0:00:01.221) 0:04:33.321 ********* 2026-03-10 00:51:21.970622 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-10 00:51:21.970628 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-10 00:51:21.970635 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-10 00:51:21.970645 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-10 00:51:21.970656 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-10 00:51:21.970663 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-10 00:51:21.970670 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-10 00:51:21.970676 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-10 00:51:21.970683 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-10 00:51:21.970689 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-10 00:51:21.970696 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-10 00:51:21.970702 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-10 00:51:21.970713 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-10 00:51:21.970720 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-10 00:51:21.970727 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-10 00:51:21.970733 | orchestrator | ok: 
[testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-10 00:51:21.970740 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-10 00:51:21.970747 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-10 00:51:21.970753 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-10 00:51:21.970760 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-10 00:51:21.970766 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-10 00:51:21.970773 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-10 00:51:21.970779 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-10 00:51:21.970786 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-10 00:51:21.970793 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-10 00:51:21.970799 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-10 00:51:21.970806 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-10 00:51:21.970812 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-10 00:51:21.970819 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-10 00:51:21.970825 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-10 00:51:21.970832 | orchestrator | 2026-03-10 00:51:21.970839 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-10 
00:51:21.970845 | orchestrator | Tuesday 10 March 2026 00:51:19 +0000 (0:00:17.727) 0:04:51.049 ********* 2026-03-10 00:51:21.970852 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:21.970858 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:21.970865 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:21.970872 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:21.970879 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.970885 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.970892 | orchestrator | 2026-03-10 00:51:21.970898 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-10 00:51:21.970910 | orchestrator | Tuesday 10 March 2026 00:51:20 +0000 (0:00:01.029) 0:04:52.078 ********* 2026-03-10 00:51:21.970917 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:21.970923 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:21.970930 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:21.970937 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:21.970943 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:21.970950 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:21.970956 | orchestrator | 2026-03-10 00:51:21.970963 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:51:21.970970 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:21.970977 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-10 00:51:21.970984 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-10 00:51:21.970991 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-10 00:51:21.970998 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-10 00:51:21.971008 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-10 00:51:21.971015 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-10 00:51:21.971039 | orchestrator | 2026-03-10 00:51:21.971046 | orchestrator | 2026-03-10 00:51:21.971053 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:51:21.971060 | orchestrator | Tuesday 10 March 2026 00:51:21 +0000 (0:00:00.560) 0:04:52.639 ********* 2026-03-10 00:51:21.971067 | orchestrator | =============================================================================== 2026-03-10 00:51:21.971073 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.87s 2026-03-10 00:51:21.971080 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.72s 2026-03-10 00:51:21.971087 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 22.71s 2026-03-10 00:51:21.971097 | orchestrator | kubectl : Install required packages ------------------------------------ 18.41s 2026-03-10 00:51:21.971104 | orchestrator | Manage labels ---------------------------------------------------------- 17.73s 2026-03-10 00:51:21.971111 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.01s 2026-03-10 00:51:21.971118 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.20s 2026-03-10 00:51:21.971124 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.18s 2026-03-10 00:51:21.971131 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.12s 2026-03-10 00:51:21.971138 | orchestrator | 
k3s_server_post : Install Cilium ---------------------------------------- 5.79s 2026-03-10 00:51:21.971144 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.56s 2026-03-10 00:51:21.971151 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.42s 2026-03-10 00:51:21.971158 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.33s 2026-03-10 00:51:21.971164 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.86s 2026-03-10 00:51:21.971171 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.79s 2026-03-10 00:51:21.971183 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.61s 2026-03-10 00:51:21.971189 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.34s 2026-03-10 00:51:21.971196 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.09s 2026-03-10 00:51:21.971203 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.08s 2026-03-10 00:51:21.971209 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.07s 2026-03-10 00:51:21.971216 | orchestrator | 2026-03-10 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:25.066603 | orchestrator | 2026-03-10 00:51:25 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:25.066687 | orchestrator | 2026-03-10 00:51:25 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:25.068283 | orchestrator | 2026-03-10 00:51:25 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:25.070635 | orchestrator | 2026-03-10 00:51:25 | INFO  | Task 
55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:25.070929 | orchestrator | 2026-03-10 00:51:25 | INFO  | Task 47874a6c-887d-4b1e-b15c-5281f63e6780 is in state STARTED 2026-03-10 00:51:25.074393 | orchestrator | 2026-03-10 00:51:25 | INFO  | Task 22cc6fff-eb5c-4b4c-8801-badf01e15eaf is in state STARTED 2026-03-10 00:51:25.074461 | orchestrator | 2026-03-10 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:28.132736 | orchestrator | 2026-03-10 00:51:28 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:28.135754 | orchestrator | 2026-03-10 00:51:28 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:28.138381 | orchestrator | 2026-03-10 00:51:28 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:28.141989 | orchestrator | 2026-03-10 00:51:28 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:28.143736 | orchestrator | 2026-03-10 00:51:28 | INFO  | Task 47874a6c-887d-4b1e-b15c-5281f63e6780 is in state STARTED 2026-03-10 00:51:28.146305 | orchestrator | 2026-03-10 00:51:28 | INFO  | Task 22cc6fff-eb5c-4b4c-8801-badf01e15eaf is in state STARTED 2026-03-10 00:51:28.146472 | orchestrator | 2026-03-10 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:31.187005 | orchestrator | 2026-03-10 00:51:31 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:31.187184 | orchestrator | 2026-03-10 00:51:31 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:31.188167 | orchestrator | 2026-03-10 00:51:31 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:31.189217 | orchestrator | 2026-03-10 00:51:31 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:31.190448 | orchestrator | 2026-03-10 00:51:31 | INFO  | Task 
47874a6c-887d-4b1e-b15c-5281f63e6780 is in state STARTED 2026-03-10 00:51:31.192437 | orchestrator | 2026-03-10 00:51:31 | INFO  | Task 22cc6fff-eb5c-4b4c-8801-badf01e15eaf is in state STARTED 2026-03-10 00:51:31.192498 | orchestrator | 2026-03-10 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:34.231192 | orchestrator | 2026-03-10 00:51:34 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:34.231277 | orchestrator | 2026-03-10 00:51:34 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:34.233954 | orchestrator | 2026-03-10 00:51:34 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:34.235912 | orchestrator | 2026-03-10 00:51:34 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:34.235968 | orchestrator | 2026-03-10 00:51:34 | INFO  | Task 47874a6c-887d-4b1e-b15c-5281f63e6780 is in state SUCCESS 2026-03-10 00:51:34.243088 | orchestrator | 2026-03-10 00:51:34 | INFO  | Task 22cc6fff-eb5c-4b4c-8801-badf01e15eaf is in state STARTED 2026-03-10 00:51:34.243179 | orchestrator | 2026-03-10 00:51:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:37.326810 | orchestrator | 2026-03-10 00:51:37 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:37.328672 | orchestrator | 2026-03-10 00:51:37 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:37.329724 | orchestrator | 2026-03-10 00:51:37 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:37.332031 | orchestrator | 2026-03-10 00:51:37 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:37.332686 | orchestrator | 2026-03-10 00:51:37 | INFO  | Task 22cc6fff-eb5c-4b4c-8801-badf01e15eaf is in state SUCCESS 2026-03-10 00:51:37.332728 | orchestrator | 2026-03-10 00:51:37 | INFO  | Wait 1 
second(s) until the next check 2026-03-10 00:51:40.374661 | orchestrator | 2026-03-10 00:51:40 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:40.376831 | orchestrator | 2026-03-10 00:51:40 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:40.376882 | orchestrator | 2026-03-10 00:51:40 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:40.377076 | orchestrator | 2026-03-10 00:51:40 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:40.377168 | orchestrator | 2026-03-10 00:51:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:43.407109 | orchestrator | 2026-03-10 00:51:43 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:43.407203 | orchestrator | 2026-03-10 00:51:43 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:43.408021 | orchestrator | 2026-03-10 00:51:43 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:43.408872 | orchestrator | 2026-03-10 00:51:43 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:43.408903 | orchestrator | 2026-03-10 00:51:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:46.450644 | orchestrator | 2026-03-10 00:51:46 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:46.451521 | orchestrator | 2026-03-10 00:51:46 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:46.452528 | orchestrator | 2026-03-10 00:51:46 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:46.453843 | orchestrator | 2026-03-10 00:51:46 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:46.453886 | orchestrator | 2026-03-10 00:51:46 | INFO  | Wait 1 second(s) until the next check 
2026-03-10 00:51:49.495349 | orchestrator | 2026-03-10 00:51:49 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:49.495649 | orchestrator | 2026-03-10 00:51:49 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:49.496823 | orchestrator | 2026-03-10 00:51:49 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:49.497873 | orchestrator | 2026-03-10 00:51:49 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:49.498201 | orchestrator | 2026-03-10 00:51:49 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:52.542255 | orchestrator | 2026-03-10 00:51:52 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:52.543572 | orchestrator | 2026-03-10 00:51:52 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:52.544450 | orchestrator | 2026-03-10 00:51:52 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:52.545497 | orchestrator | 2026-03-10 00:51:52 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:52.545519 | orchestrator | 2026-03-10 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:55.588411 | orchestrator | 2026-03-10 00:51:55 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state STARTED 2026-03-10 00:51:55.589357 | orchestrator | 2026-03-10 00:51:55 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:55.590172 | orchestrator | 2026-03-10 00:51:55 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:55.591266 | orchestrator | 2026-03-10 00:51:55 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:55.594669 | orchestrator | 2026-03-10 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:58.645672 | 
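The repeated status lines above come from a simple poll-and-wait loop over the outstanding task UUIDs: query each task, report its state, sleep, repeat until none are still STARTED. A minimal sketch of that pattern in Python (the `get_task_state` callable is a hypothetical stand-in for the real task-state API the job queries):

```python
import time

def wait_for_tasks(get_task_state, task_ids, interval=1.0):
    """Poll until every task has left the STARTED/PENDING states.

    get_task_state: callable mapping a task id to its current state string
    (hypothetical helper; the real job asks the manager's task API).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Each poll cycle reports every task, matching the blocks of four `is in state STARTED` lines followed by one `Wait 1 second(s)` line seen above.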
orchestrator | 2026-03-10 00:51:58 | INFO  | Task d5f4627a-52ea-495c-89fb-3a2096b89b7a is in state SUCCESS 2026-03-10 00:51:58.646532 | orchestrator | 2026-03-10 00:51:58.646579 | orchestrator | 2026-03-10 00:51:58.646596 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-10 00:51:58.646612 | orchestrator | 2026-03-10 00:51:58.646625 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-10 00:51:58.646639 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 (0:00:00.232) 0:00:00.232 ********* 2026-03-10 00:51:58.646654 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-10 00:51:58.646667 | orchestrator | 2026-03-10 00:51:58.646680 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-10 00:51:58.646693 | orchestrator | Tuesday 10 March 2026 00:51:29 +0000 (0:00:00.926) 0:00:01.158 ********* 2026-03-10 00:51:58.646706 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:58.646719 | orchestrator | 2026-03-10 00:51:58.646731 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-10 00:51:58.646744 | orchestrator | Tuesday 10 March 2026 00:51:30 +0000 (0:00:01.656) 0:00:02.815 ********* 2026-03-10 00:51:58.646759 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:58.646772 | orchestrator | 2026-03-10 00:51:58.646786 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:51:58.646800 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:58.646816 | orchestrator | 2026-03-10 00:51:58.646829 | orchestrator | 2026-03-10 00:51:58.646843 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:51:58.646857 | orchestrator | Tuesday 10 March 2026 
00:51:31 +0000 (0:00:00.600) 0:00:03.415 ********* 2026-03-10 00:51:58.646871 | orchestrator | =============================================================================== 2026-03-10 00:51:58.646886 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.66s 2026-03-10 00:51:58.646899 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s 2026-03-10 00:51:58.646912 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.60s 2026-03-10 00:51:58.646954 | orchestrator | 2026-03-10 00:51:58.646966 | orchestrator | 2026-03-10 00:51:58.647022 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-10 00:51:58.647032 | orchestrator | 2026-03-10 00:51:58.647040 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-10 00:51:58.647048 | orchestrator | Tuesday 10 March 2026 00:51:27 +0000 (0:00:00.187) 0:00:00.187 ********* 2026-03-10 00:51:58.647056 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:58.647065 | orchestrator | 2026-03-10 00:51:58.647073 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-10 00:51:58.647081 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 (0:00:00.846) 0:00:01.033 ********* 2026-03-10 00:51:58.647088 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:58.647096 | orchestrator | 2026-03-10 00:51:58.647104 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-10 00:51:58.647112 | orchestrator | Tuesday 10 March 2026 00:51:29 +0000 (0:00:00.733) 0:00:01.767 ********* 2026-03-10 00:51:58.647120 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-10 00:51:58.647128 | orchestrator | 2026-03-10 00:51:58.647136 | orchestrator | TASK [Write kubeconfig file] 
*************************************************** 2026-03-10 00:51:58.647144 | orchestrator | Tuesday 10 March 2026 00:51:30 +0000 (0:00:00.813) 0:00:02.580 ********* 2026-03-10 00:51:58.647151 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:58.647159 | orchestrator | 2026-03-10 00:51:58.647167 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-10 00:51:58.647175 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:02.185) 0:00:04.765 ********* 2026-03-10 00:51:58.647183 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:58.647191 | orchestrator | 2026-03-10 00:51:58.647198 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-10 00:51:58.647206 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:00.631) 0:00:05.396 ********* 2026-03-10 00:51:58.647214 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-10 00:51:58.647222 | orchestrator | 2026-03-10 00:51:58.647230 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-10 00:51:58.647238 | orchestrator | Tuesday 10 March 2026 00:51:34 +0000 (0:00:01.756) 0:00:07.153 ********* 2026-03-10 00:51:58.647246 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-10 00:51:58.647254 | orchestrator | 2026-03-10 00:51:58.647262 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-10 00:51:58.647270 | orchestrator | Tuesday 10 March 2026 00:51:35 +0000 (0:00:01.040) 0:00:08.193 ********* 2026-03-10 00:51:58.647277 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:58.647285 | orchestrator | 2026-03-10 00:51:58.647293 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-10 00:51:58.647301 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.533) 0:00:08.727 ********* 2026-03-10 
00:51:58.647309 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:58.647316 | orchestrator | 2026-03-10 00:51:58.647324 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:51:58.647332 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:58.647340 | orchestrator | 2026-03-10 00:51:58.647348 | orchestrator | 2026-03-10 00:51:58.647356 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:51:58.647364 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.370) 0:00:09.098 ********* 2026-03-10 00:51:58.647371 | orchestrator | =============================================================================== 2026-03-10 00:51:58.647379 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.19s 2026-03-10 00:51:58.647387 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.76s 2026-03-10 00:51:58.647395 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.04s 2026-03-10 00:51:58.647423 | orchestrator | Get home directory of operator user ------------------------------------- 0.85s 2026-03-10 00:51:58.647432 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 2026-03-10 00:51:58.647440 | orchestrator | Create .kube directory -------------------------------------------------- 0.73s 2026-03-10 00:51:58.647448 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.63s 2026-03-10 00:51:58.647455 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.53s 2026-03-10 00:51:58.647463 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.37s 2026-03-10 00:51:58.647471 | orchestrator | 2026-03-10 00:51:58.647479 | 
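The two "Change server address in the kubeconfig" tasks boil down to rewriting the `server:` entry fetched from `testbed-node-0` so that clients reach the API over an address routable from the manager. A regex-based sketch of that rewrite (an illustration, not the play's actual implementation):

```python
import re

def set_kubeconfig_server(kubeconfig_text, new_server):
    """Replace every 'server:' value in a kubeconfig document.

    The kubeconfig copied off the node points at the node-internal
    address; swapping the server URL makes the same credentials usable
    from the manager (and from inside the manager service).
    """
    return re.sub(r"(?m)^(\s*server:\s*).*$",
                  r"\g<1>" + new_server,
                  kubeconfig_text)
```

With the rewritten file in place, the remaining tasks only need to export `KUBECONFIG` and enable `kubectl` completion, as the recap above shows.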
orchestrator | 2026-03-10 00:51:58.647487 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-10 00:51:58.647495 | orchestrator | 2026-03-10 00:51:58.647502 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-10 00:51:58.647515 | orchestrator | Tuesday 10 March 2026 00:49:24 +0000 (0:00:00.427) 0:00:00.427 ********* 2026-03-10 00:51:58.647527 | orchestrator | ok: [localhost] => { 2026-03-10 00:51:58.647539 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-10 00:51:58.647550 | orchestrator | } 2026-03-10 00:51:58.647561 | orchestrator | 2026-03-10 00:51:58.647572 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-10 00:51:58.647592 | orchestrator | Tuesday 10 March 2026 00:49:24 +0000 (0:00:00.082) 0:00:00.510 ********* 2026-03-10 00:51:58.647606 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-10 00:51:58.647620 | orchestrator | ...ignoring 2026-03-10 00:51:58.647633 | orchestrator | 2026-03-10 00:51:58.647646 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-10 00:51:58.647659 | orchestrator | Tuesday 10 March 2026 00:49:28 +0000 (0:00:03.963) 0:00:04.476 ********* 2026-03-10 00:51:58.647670 | orchestrator | skipping: [localhost] 2026-03-10 00:51:58.647681 | orchestrator | 2026-03-10 00:51:58.647693 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-10 00:51:58.647704 | orchestrator | Tuesday 10 March 2026 00:49:28 +0000 (0:00:00.221) 0:00:04.698 ********* 2026-03-10 00:51:58.647809 | orchestrator | ok: [localhost] 2026-03-10 00:51:58.648071 | orchestrator | 2026-03-10 00:51:58.648081 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:51:58.648089 | orchestrator | 2026-03-10 00:51:58.648097 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:51:58.648105 | orchestrator | Tuesday 10 March 2026 00:49:29 +0000 (0:00:00.677) 0:00:05.375 ********* 2026-03-10 00:51:58.648113 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:58.648121 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:58.648129 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:58.648137 | orchestrator | 2026-03-10 00:51:58.648145 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:51:58.648153 | orchestrator | Tuesday 10 March 2026 00:49:30 +0000 (0:00:01.585) 0:00:06.960 ********* 2026-03-10 00:51:58.648161 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-10 00:51:58.648169 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-03-10 00:51:58.648177 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-10 00:51:58.648185 | orchestrator | 2026-03-10 00:51:58.648193 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-10 00:51:58.648200 | orchestrator | 2026-03-10 00:51:58.648208 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-10 00:51:58.648223 | orchestrator | Tuesday 10 March 2026 00:49:31 +0000 (0:00:01.008) 0:00:07.969 ********* 2026-03-10 00:51:58.648232 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:51:58.648252 | orchestrator | 2026-03-10 00:51:58.648260 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-10 00:51:58.648281 | orchestrator | Tuesday 10 March 2026 00:49:32 +0000 (0:00:00.559) 0:00:08.529 ********* 2026-03-10 00:51:58.648290 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:58.648298 | orchestrator | 2026-03-10 00:51:58.648305 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-10 00:51:58.648313 | orchestrator | Tuesday 10 March 2026 00:49:33 +0000 (0:00:01.388) 0:00:09.917 ********* 2026-03-10 00:51:58.648321 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.648329 | orchestrator | 2026-03-10 00:51:58.648337 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-10 00:51:58.648345 | orchestrator | Tuesday 10 March 2026 00:49:34 +0000 (0:00:00.581) 0:00:10.499 ********* 2026-03-10 00:51:58.648353 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.648361 | orchestrator | 2026-03-10 00:51:58.648369 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-10 00:51:58.648377 | 
orchestrator | Tuesday 10 March 2026 00:49:34 +0000 (0:00:00.545) 0:00:11.044 ********* 2026-03-10 00:51:58.648385 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.648393 | orchestrator | 2026-03-10 00:51:58.648415 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-10 00:51:58.648424 | orchestrator | Tuesday 10 March 2026 00:49:35 +0000 (0:00:00.530) 0:00:11.574 ********* 2026-03-10 00:51:58.648431 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.648439 | orchestrator | 2026-03-10 00:51:58.648447 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-10 00:51:58.648455 | orchestrator | Tuesday 10 March 2026 00:49:36 +0000 (0:00:01.170) 0:00:12.745 ********* 2026-03-10 00:51:58.648463 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:51:58.648471 | orchestrator | 2026-03-10 00:51:58.648479 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-10 00:51:58.648516 | orchestrator | Tuesday 10 March 2026 00:49:37 +0000 (0:00:00.898) 0:00:13.643 ********* 2026-03-10 00:51:58.648525 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:58.648533 | orchestrator | 2026-03-10 00:51:58.648541 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-10 00:51:58.648549 | orchestrator | Tuesday 10 March 2026 00:49:38 +0000 (0:00:01.244) 0:00:14.887 ********* 2026-03-10 00:51:58.648556 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.648564 | orchestrator | 2026-03-10 00:51:58.648572 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-10 00:51:58.648580 | orchestrator | Tuesday 10 March 2026 00:49:39 +0000 (0:00:00.495) 0:00:15.383 ********* 2026-03-10 00:51:58.648588 | orchestrator | 
skipping: [testbed-node-0] 2026-03-10 00:51:58.648596 | orchestrator | 2026-03-10 00:51:58.648603 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-10 00:51:58.648611 | orchestrator | Tuesday 10 March 2026 00:49:40 +0000 (0:00:01.083) 0:00:16.467 ********* 2026-03-10 00:51:58.648625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.648647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.648657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.648666 | orchestrator | 2026-03-10 00:51:58.648675 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-10 00:51:58.648684 | orchestrator | Tuesday 10 March 2026 00:49:42 +0000 (0:00:02.458) 0:00:18.925 ********* 2026-03-10 00:51:58.648700 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.648712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.648733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.648743 | orchestrator | 2026-03-10 00:51:58.648755 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-10 00:51:58.648769 | orchestrator | Tuesday 10 March 2026 00:49:45 +0000 (0:00:02.599) 0:00:21.524 ********* 2026-03-10 00:51:58.648783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:58.648796 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:58.648809 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:58.648821 | 
orchestrator | 2026-03-10 00:51:58.648834 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-10 00:51:58.648849 | orchestrator | Tuesday 10 March 2026 00:49:47 +0000 (0:00:01.878) 0:00:23.403 ********* 2026-03-10 00:51:58.648863 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-10 00:51:58.648876 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-10 00:51:58.648890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-10 00:51:58.648903 | orchestrator | 2026-03-10 00:51:58.648924 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-10 00:51:58.648939 | orchestrator | Tuesday 10 March 2026 00:49:49 +0000 (0:00:02.403) 0:00:25.806 ********* 2026-03-10 00:51:58.648954 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-10 00:51:58.648967 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-10 00:51:58.649016 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-10 00:51:58.649026 | orchestrator | 2026-03-10 00:51:58.649038 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-10 00:51:58.649052 | orchestrator | Tuesday 10 March 2026 00:49:51 +0000 (0:00:01.989) 0:00:27.795 ********* 2026-03-10 00:51:58.649064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-10 00:51:58.649086 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-10 00:51:58.649099 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-10 00:51:58.649113 | orchestrator | 2026-03-10 00:51:58.649126 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-10 00:51:58.649139 | orchestrator | Tuesday 10 March 2026 00:49:54 +0000 (0:00:02.440) 0:00:30.236 ********* 2026-03-10 00:51:58.649152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-10 00:51:58.649284 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-10 00:51:58.649305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-10 00:51:58.649317 | orchestrator | 2026-03-10 00:51:58.649328 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-10 00:51:58.649341 | orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:02.573) 0:00:32.810 ********* 2026-03-10 00:51:58.649354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-10 00:51:58.649368 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-10 00:51:58.649381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-10 00:51:58.649394 | orchestrator | 2026-03-10 00:51:58.649407 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-10 00:51:58.649421 | orchestrator | Tuesday 10 March 2026 00:49:58 +0000 (0:00:02.022) 0:00:34.832 ********* 2026-03-10 00:51:58.649435 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.649450 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:58.649463 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:58.649478 | orchestrator | 2026-03-10 
00:51:58.649487 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-10 00:51:58.649495 | orchestrator | Tuesday 10 March 2026 00:49:59 +0000 (0:00:00.631) 0:00:35.464 ********* 2026-03-10 00:51:58.649516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.649546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.649575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:51:58.649590 | orchestrator | 2026-03-10 00:51:58.649603 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-10 00:51:58.649615 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:02.169) 0:00:37.633 ********* 2026-03-10 00:51:58.649627 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:58.649641 | orchestrator | changed: [testbed-node-1] 
2026-03-10 00:51:58.649655 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:58.649668 | orchestrator | 2026-03-10 00:51:58.649681 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-10 00:51:58.649695 | orchestrator | Tuesday 10 March 2026 00:50:02 +0000 (0:00:01.005) 0:00:38.638 ********* 2026-03-10 00:51:58.649707 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:58.649718 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:58.649726 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:58.649734 | orchestrator | 2026-03-10 00:51:58.649742 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-10 00:51:58.649750 | orchestrator | Tuesday 10 March 2026 00:50:10 +0000 (0:00:07.629) 0:00:46.268 ********* 2026-03-10 00:51:58.649758 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:58.649765 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:58.649773 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:58.649781 | orchestrator | 2026-03-10 00:51:58.649789 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-10 00:51:58.649796 | orchestrator | 2026-03-10 00:51:58.649804 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-10 00:51:58.649812 | orchestrator | Tuesday 10 March 2026 00:50:10 +0000 (0:00:00.441) 0:00:46.709 ********* 2026-03-10 00:51:58.649820 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:58.649828 | orchestrator | 2026-03-10 00:51:58.649842 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-10 00:51:58.649850 | orchestrator | Tuesday 10 March 2026 00:50:11 +0000 (0:00:00.535) 0:00:47.245 ********* 2026-03-10 00:51:58.649858 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:58.649866 | orchestrator | 2026-03-10 
00:51:58.649874 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-10 00:51:58.649881 | orchestrator | Tuesday 10 March 2026 00:50:11 +0000 (0:00:00.236) 0:00:47.481 ********* 2026-03-10 00:51:58.649889 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:58.649897 | orchestrator | 2026-03-10 00:51:58.649905 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-10 00:51:58.649913 | orchestrator | Tuesday 10 March 2026 00:50:18 +0000 (0:00:07.127) 0:00:54.608 ********* 2026-03-10 00:51:58.649929 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:58.649938 | orchestrator | 2026-03-10 00:51:58.649947 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-10 00:51:58.649956 | orchestrator | 2026-03-10 00:51:58.649965 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-10 00:51:58.650110 | orchestrator | Tuesday 10 March 2026 00:51:09 +0000 (0:00:51.171) 0:01:45.779 ********* 2026-03-10 00:51:58.650135 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:58.650154 | orchestrator | 2026-03-10 00:51:58.650169 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-10 00:51:58.650182 | orchestrator | Tuesday 10 March 2026 00:51:10 +0000 (0:00:00.711) 0:01:46.491 ********* 2026-03-10 00:51:58.650194 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:58.650203 | orchestrator | 2026-03-10 00:51:58.650211 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-10 00:51:58.650218 | orchestrator | Tuesday 10 March 2026 00:51:10 +0000 (0:00:00.271) 0:01:46.763 ********* 2026-03-10 00:51:58.650226 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:58.650234 | orchestrator | 2026-03-10 00:51:58.650242 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-03-10 00:51:58.650249 | orchestrator | Tuesday 10 March 2026 00:51:12 +0000 (0:00:02.004) 0:01:48.771 ********* 2026-03-10 00:51:58.650257 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:58.650265 | orchestrator | 2026-03-10 00:51:58.650273 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-10 00:51:58.650281 | orchestrator | 2026-03-10 00:51:58.650289 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-10 00:51:58.650306 | orchestrator | Tuesday 10 March 2026 00:51:30 +0000 (0:00:17.797) 0:02:06.569 ********* 2026-03-10 00:51:58.650314 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:58.650322 | orchestrator | 2026-03-10 00:51:58.650330 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-10 00:51:58.650338 | orchestrator | Tuesday 10 March 2026 00:51:31 +0000 (0:00:00.837) 0:02:07.406 ********* 2026-03-10 00:51:58.650345 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:58.650353 | orchestrator | 2026-03-10 00:51:58.650361 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-10 00:51:58.650369 | orchestrator | Tuesday 10 March 2026 00:51:31 +0000 (0:00:00.241) 0:02:07.648 ********* 2026-03-10 00:51:58.650376 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:58.650384 | orchestrator | 2026-03-10 00:51:58.650392 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-10 00:51:58.650400 | orchestrator | Tuesday 10 March 2026 00:51:38 +0000 (0:00:07.020) 0:02:14.669 ********* 2026-03-10 00:51:58.650408 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:58.650415 | orchestrator | 2026-03-10 00:51:58.650423 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-03-10 00:51:58.650431 | orchestrator | 2026-03-10 00:51:58.650439 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-10 00:51:58.650446 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:12.558) 0:02:27.227 ********* 2026-03-10 00:51:58.650454 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:51:58.650462 | orchestrator | 2026-03-10 00:51:58.650470 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-10 00:51:58.650478 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:00.771) 0:02:27.999 ********* 2026-03-10 00:51:58.650485 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-10 00:51:58.650596 | orchestrator | enable_outward_rabbitmq_True 2026-03-10 00:51:58.650608 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-10 00:51:58.650616 | orchestrator | outward_rabbitmq_restart 2026-03-10 00:51:58.650623 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:58.650631 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:58.650649 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:58.650657 | orchestrator | 2026-03-10 00:51:58.650665 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-10 00:51:58.650673 | orchestrator | skipping: no hosts matched 2026-03-10 00:51:58.650680 | orchestrator | 2026-03-10 00:51:58.650688 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-10 00:51:58.650696 | orchestrator | skipping: no hosts matched 2026-03-10 00:51:58.650704 | orchestrator | 2026-03-10 00:51:58.650711 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-10 00:51:58.650719 | orchestrator | skipping: no hosts matched 
2026-03-10 00:51:58.650727 | orchestrator | 2026-03-10 00:51:58.650735 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:51:58.650744 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-10 00:51:58.650753 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-10 00:51:58.650785 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:51:58.650800 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:51:58.650808 | orchestrator | 2026-03-10 00:51:58.650816 | orchestrator | 2026-03-10 00:51:58.650823 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:51:58.650831 | orchestrator | Tuesday 10 March 2026 00:51:55 +0000 (0:00:03.252) 0:02:31.252 ********* 2026-03-10 00:51:58.650839 | orchestrator | =============================================================================== 2026-03-10 00:51:58.650847 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.52s 2026-03-10 00:51:58.650855 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.16s 2026-03-10 00:51:58.650863 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.63s 2026-03-10 00:51:58.650871 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.97s 2026-03-10 00:51:58.650878 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.25s 2026-03-10 00:51:58.650886 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.60s 2026-03-10 00:51:58.650894 | orchestrator | rabbitmq : Copying over definitions.json 
-------------------------------- 2.57s 2026-03-10 00:51:58.650902 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.46s 2026-03-10 00:51:58.650924 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.44s 2026-03-10 00:51:58.650941 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.40s 2026-03-10 00:51:58.650949 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.17s 2026-03-10 00:51:58.650957 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.09s 2026-03-10 00:51:58.650965 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.02s 2026-03-10 00:51:58.650991 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.99s 2026-03-10 00:51:58.651003 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.88s 2026-03-10 00:51:58.651018 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.59s 2026-03-10 00:51:58.651026 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.39s 2026-03-10 00:51:58.651034 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.24s 2026-03-10 00:51:58.651042 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.17s 2026-03-10 00:51:58.651049 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.08s 2026-03-10 00:51:58.651073 | orchestrator | 2026-03-10 00:51:58 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:51:58.651081 | orchestrator | 2026-03-10 00:51:58 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state STARTED 2026-03-10 00:51:58.651089 | orchestrator | 2026-03-10 00:51:58 | INFO  | 
Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:51:58.651097 | orchestrator | 2026-03-10 00:51:58 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 seconds from 00:52:01 through 00:52:53; all three tasks remained in state STARTED ...] 2026-03-10 00:52:56.591182 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:52:56.592674 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task 9d433622-5c84-435e-b0d8-e4b49826e9b5 is in state SUCCESS 2026-03-10 00:52:56.594509 | orchestrator | 2026-03-10 00:52:56.594563 | orchestrator | 2026-03-10 00:52:56.594576 | orchestrator | PLAY [Group hosts based on
configuration] ************************************** 2026-03-10 00:52:56.594588 | orchestrator | 2026-03-10 00:52:56.594599 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:52:56.594611 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:00.166) 0:00:00.166 ********* 2026-03-10 00:52:56.594622 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:52:56.594634 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:52:56.594644 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:52:56.594655 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.594666 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.594721 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.594733 | orchestrator | 2026-03-10 00:52:56.595055 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:52:56.595075 | orchestrator | Tuesday 10 March 2026 00:50:18 +0000 (0:00:00.888) 0:00:01.055 ********* 2026-03-10 00:52:56.595115 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-10 00:52:56.595129 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-10 00:52:56.595141 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-10 00:52:56.595153 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-10 00:52:56.595166 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-10 00:52:56.595178 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-10 00:52:56.595191 | orchestrator | 2026-03-10 00:52:56.595203 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-10 00:52:56.595216 | orchestrator | 2026-03-10 00:52:56.595228 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-10 00:52:56.595241 | orchestrator | Tuesday 10 March 2026 00:50:19 +0000 (0:00:01.402) 
0:00:02.458 ********* 2026-03-10 00:52:56.595255 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:52:56.595527 | orchestrator | 2026-03-10 00:52:56.595547 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-10 00:52:56.595572 | orchestrator | Tuesday 10 March 2026 00:50:20 +0000 (0:00:01.324) 0:00:03.782 ********* 2026-03-10 00:52:56.595587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595623 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595681 | orchestrator | 2026-03-10 00:52:56.595692 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-10 00:52:56.595703 | orchestrator | Tuesday 10 March 2026 00:50:22 +0000 (0:00:01.127) 0:00:04.910 ********* 2026-03-10 00:52:56.595714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595788 | orchestrator | 2026-03-10 00:52:56.595799 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-10 00:52:56.595810 | orchestrator | Tuesday 10 March 2026 00:50:23 +0000 (0:00:01.877) 0:00:06.788 ********* 2026-03-10 00:52:56.595822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595950 | orchestrator | 2026-03-10 00:52:56.595962 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-10 00:52:56.595973 | orchestrator | Tuesday 10 March 2026 
00:50:26 +0000 (0:00:02.776) 0:00:09.564 ********* 2026-03-10 00:52:56.595984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.595995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596036 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596074 | orchestrator | 2026-03-10 00:52:56.596093 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-10 00:52:56.596112 | orchestrator | Tuesday 10 March 2026 00:50:29 +0000 (0:00:02.293) 0:00:11.858 ********* 2026-03-10 00:52:56.596130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.596278 | orchestrator | 2026-03-10 00:52:56.596303 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-10 00:52:56.596323 | orchestrator | Tuesday 10 March 2026 00:50:30 +0000 (0:00:01.767) 0:00:13.625 ********* 2026-03-10 00:52:56.596342 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:52:56.596361 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:52:56.596381 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:56.596401 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:56.596422 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:52:56.596445 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:56.596464 | orchestrator | 2026-03-10 00:52:56.596486 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-10 00:52:56.596512 | orchestrator | Tuesday 10 March 2026 00:50:33 +0000 (0:00:02.814) 0:00:16.440 ********* 2026-03-10 00:52:56.596536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-10 00:52:56.596556 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-10 00:52:56.596576 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-10 00:52:56.596607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-10 00:52:56.596627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-10 00:52:56.596648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-10 00:52:56.596668 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-03-10 00:52:56.596688 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-10 00:52:56.596699 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-10 00:52:56.596710 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-10 00:52:56.596721 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-10 00:52:56.596731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-10 00:52:56.596742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-10 00:52:56.596755 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-10 00:52:56.596766 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-10 00:52:56.596777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-10 00:52:56.596795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-10 00:52:56.596806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-10 00:52:56.596817 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-10 00:52:56.596829 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-10 00:52:56.596840 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-10 00:52:56.596850 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-10 00:52:56.596871 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-10 00:52:56.596882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-10 00:52:56.596893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-10 00:52:56.596935 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-10 00:52:56.596947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-10 00:52:56.596957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-10 00:52:56.596968 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-10 00:52:56.596979 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-10 00:52:56.596989 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-10 00:52:56.597000 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-10 00:52:56.597011 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-10 00:52:56.597022 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-10 00:52:56.597033 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2026-03-10 00:52:56.597044 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-10 00:52:56.597054 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-10 00:52:56.597065 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-10 00:52:56.597076 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-10 00:52:56.597086 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-10 00:52:56.597105 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-10 00:52:56.597116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-10 00:52:56.597127 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-10 00:52:56.597138 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-10 00:52:56.597149 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-10 00:52:56.597159 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-10 00:52:56.597170 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-10 00:52:56.597181 | 
orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-10 00:52:56.597192 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-10 00:52:56.597203 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-10 00:52:56.597214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-10 00:52:56.597233 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-10 00:52:56.597252 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-10 00:52:56.597272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-10 00:52:56.597290 | orchestrator | 2026-03-10 00:52:56.597309 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-10 00:52:56.597326 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:20.957) 0:00:37.398 ********* 2026-03-10 00:52:56.597344 | orchestrator | 2026-03-10 00:52:56.597361 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-10 00:52:56.597381 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.118) 0:00:37.516 ********* 2026-03-10 00:52:56.597399 | orchestrator | 2026-03-10 00:52:56.597418 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-10 00:52:56.597437 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.087) 0:00:37.604 ********* 2026-03-10 00:52:56.597456 | 
orchestrator | 2026-03-10 00:52:56.597473 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-10 00:52:56.597491 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.074) 0:00:37.679 ********* 2026-03-10 00:52:56.597509 | orchestrator | 2026-03-10 00:52:56.597529 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-10 00:52:56.597543 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:00.073) 0:00:37.753 ********* 2026-03-10 00:52:56.597554 | orchestrator | 2026-03-10 00:52:56.597565 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-10 00:52:56.597576 | orchestrator | Tuesday 10 March 2026 00:50:55 +0000 (0:00:00.178) 0:00:37.932 ********* 2026-03-10 00:52:56.597586 | orchestrator | 2026-03-10 00:52:56.597597 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-10 00:52:56.597608 | orchestrator | Tuesday 10 March 2026 00:50:55 +0000 (0:00:00.220) 0:00:38.153 ********* 2026-03-10 00:52:56.597618 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:52:56.597629 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:52:56.597641 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:52:56.597660 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.597679 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.597697 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.597716 | orchestrator | 2026-03-10 00:52:56.597733 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-10 00:52:56.597750 | orchestrator | Tuesday 10 March 2026 00:50:57 +0000 (0:00:01.857) 0:00:40.010 ********* 2026-03-10 00:52:56.597766 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:56.597783 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:52:56.597800 | orchestrator | changed: 
[testbed-node-5] 2026-03-10 00:52:56.597820 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:56.597839 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:56.597857 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:52:56.597876 | orchestrator | 2026-03-10 00:52:56.597919 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-10 00:52:56.597939 | orchestrator | 2026-03-10 00:52:56.597959 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-10 00:52:56.597978 | orchestrator | Tuesday 10 March 2026 00:51:24 +0000 (0:00:27.487) 0:01:07.497 ********* 2026-03-10 00:52:56.597996 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:52:56.598076 | orchestrator | 2026-03-10 00:52:56.598092 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-10 00:52:56.598103 | orchestrator | Tuesday 10 March 2026 00:51:26 +0000 (0:00:01.698) 0:01:09.195 ********* 2026-03-10 00:52:56.598126 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:52:56.598138 | orchestrator | 2026-03-10 00:52:56.598160 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-10 00:52:56.598172 | orchestrator | Tuesday 10 March 2026 00:51:27 +0000 (0:00:00.741) 0:01:09.937 ********* 2026-03-10 00:52:56.598182 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.598194 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.598204 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.598215 | orchestrator | 2026-03-10 00:52:56.598226 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-10 00:52:56.598237 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 
(0:00:01.356) 0:01:11.294 ********* 2026-03-10 00:52:56.598248 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.598259 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.598269 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.598280 | orchestrator | 2026-03-10 00:52:56.598291 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-10 00:52:56.598302 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 (0:00:00.433) 0:01:11.728 ********* 2026-03-10 00:52:56.598312 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.598323 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.598334 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.598345 | orchestrator | 2026-03-10 00:52:56.598355 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-10 00:52:56.598366 | orchestrator | Tuesday 10 March 2026 00:51:29 +0000 (0:00:00.521) 0:01:12.249 ********* 2026-03-10 00:52:56.598377 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.598388 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.598399 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.598409 | orchestrator | 2026-03-10 00:52:56.598420 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-10 00:52:56.598431 | orchestrator | Tuesday 10 March 2026 00:51:30 +0000 (0:00:00.700) 0:01:12.950 ********* 2026-03-10 00:52:56.598442 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.598453 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.598464 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.598474 | orchestrator | 2026-03-10 00:52:56.598485 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-10 00:52:56.598496 | orchestrator | Tuesday 10 March 2026 00:51:31 +0000 (0:00:00.958) 0:01:13.909 ********* 2026-03-10 00:52:56.598507 | 
orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598518 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598529 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598539 | orchestrator | 2026-03-10 00:52:56.598550 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-10 00:52:56.598561 | orchestrator | Tuesday 10 March 2026 00:51:31 +0000 (0:00:00.571) 0:01:14.480 ********* 2026-03-10 00:52:56.598572 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598583 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598593 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598604 | orchestrator | 2026-03-10 00:52:56.598615 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-10 00:52:56.598658 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:00.409) 0:01:14.890 ********* 2026-03-10 00:52:56.598670 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598681 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598691 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598702 | orchestrator | 2026-03-10 00:52:56.598713 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-10 00:52:56.598724 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:00.365) 0:01:15.256 ********* 2026-03-10 00:52:56.598735 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598746 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598763 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598774 | orchestrator | 2026-03-10 00:52:56.598785 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-10 00:52:56.598796 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:00.547) 0:01:15.804 ********* 2026-03-10 00:52:56.598807 | 
orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598818 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598828 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598839 | orchestrator | 2026-03-10 00:52:56.598850 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-10 00:52:56.598861 | orchestrator | Tuesday 10 March 2026 00:51:33 +0000 (0:00:00.408) 0:01:16.212 ********* 2026-03-10 00:52:56.598872 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598883 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598912 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598924 | orchestrator | 2026-03-10 00:52:56.598935 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-10 00:52:56.598946 | orchestrator | Tuesday 10 March 2026 00:51:33 +0000 (0:00:00.323) 0:01:16.536 ********* 2026-03-10 00:52:56.598957 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.598968 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.598979 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.598990 | orchestrator | 2026-03-10 00:52:56.599001 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-10 00:52:56.599012 | orchestrator | Tuesday 10 March 2026 00:51:34 +0000 (0:00:00.528) 0:01:17.064 ********* 2026-03-10 00:52:56.599022 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599033 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599044 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599055 | orchestrator | 2026-03-10 00:52:56.599066 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-10 00:52:56.599077 | orchestrator | Tuesday 10 March 2026 00:51:35 +0000 (0:00:00.803) 0:01:17.867 ********* 2026-03-10 00:52:56.599088 | 
orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599099 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599110 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599120 | orchestrator | 2026-03-10 00:52:56.599131 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-10 00:52:56.599142 | orchestrator | Tuesday 10 March 2026 00:51:35 +0000 (0:00:00.605) 0:01:18.472 ********* 2026-03-10 00:52:56.599153 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599164 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599175 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599186 | orchestrator | 2026-03-10 00:52:56.599204 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-10 00:52:56.599216 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.584) 0:01:19.059 ********* 2026-03-10 00:52:56.599226 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599237 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599248 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599259 | orchestrator | 2026-03-10 00:52:56.599270 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-10 00:52:56.599281 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.848) 0:01:19.908 ********* 2026-03-10 00:52:56.599292 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599303 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599314 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599324 | orchestrator | 2026-03-10 00:52:56.599335 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-10 00:52:56.599346 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.316) 0:01:20.224 ********* 2026-03-10 00:52:56.599357 | 
orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:52:56.599368 | orchestrator | 2026-03-10 00:52:56.599386 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-10 00:52:56.599397 | orchestrator | Tuesday 10 March 2026 00:51:38 +0000 (0:00:01.054) 0:01:21.278 ********* 2026-03-10 00:52:56.599408 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.599419 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.599430 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.599441 | orchestrator | 2026-03-10 00:52:56.599452 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-10 00:52:56.599462 | orchestrator | Tuesday 10 March 2026 00:51:39 +0000 (0:00:00.684) 0:01:21.963 ********* 2026-03-10 00:52:56.599473 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.599485 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.599495 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.599506 | orchestrator | 2026-03-10 00:52:56.599517 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-10 00:52:56.599533 | orchestrator | Tuesday 10 March 2026 00:51:39 +0000 (0:00:00.503) 0:01:22.466 ********* 2026-03-10 00:52:56.599544 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599555 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599566 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599577 | orchestrator | 2026-03-10 00:52:56.599588 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-10 00:52:56.599599 | orchestrator | Tuesday 10 March 2026 00:51:40 +0000 (0:00:00.591) 0:01:23.057 ********* 2026-03-10 00:52:56.599610 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599621 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 00:52:56.599632 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599642 | orchestrator | 2026-03-10 00:52:56.599653 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-10 00:52:56.599664 | orchestrator | Tuesday 10 March 2026 00:51:40 +0000 (0:00:00.425) 0:01:23.482 ********* 2026-03-10 00:52:56.599675 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599686 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599697 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599708 | orchestrator | 2026-03-10 00:52:56.599719 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-10 00:52:56.599730 | orchestrator | Tuesday 10 March 2026 00:51:41 +0000 (0:00:00.479) 0:01:23.962 ********* 2026-03-10 00:52:56.599740 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599751 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599762 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599773 | orchestrator | 2026-03-10 00:52:56.599784 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-10 00:52:56.599795 | orchestrator | Tuesday 10 March 2026 00:51:41 +0000 (0:00:00.405) 0:01:24.368 ********* 2026-03-10 00:52:56.599806 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599817 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:56.599828 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599839 | orchestrator | 2026-03-10 00:52:56.599850 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-10 00:52:56.599861 | orchestrator | Tuesday 10 March 2026 00:51:42 +0000 (0:00:00.745) 0:01:25.113 ********* 2026-03-10 00:52:56.599872 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:56.599883 | orchestrator 
| skipping: [testbed-node-1] 2026-03-10 00:52:56.599893 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:56.599931 | orchestrator | 2026-03-10 00:52:56.599942 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-10 00:52:56.599953 | orchestrator | Tuesday 10 March 2026 00:51:42 +0000 (0:00:00.431) 0:01:25.545 ********* 2026-03-10 00:52:56.599965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.599986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600086 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600097 | orchestrator | 2026-03-10 00:52:56.600109 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-10 00:52:56.600120 | orchestrator | Tuesday 10 March 2026 00:51:44 +0000 (0:00:01.789) 0:01:27.334 ********* 2026-03-10 00:52:56.600131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600187 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600260 | orchestrator | 2026-03-10 00:52:56.600271 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-10 00:52:56.600282 | orchestrator | Tuesday 10 March 2026 00:51:49 +0000 (0:00:04.787) 0:01:32.122 ********* 2026-03-10 00:52:56.600293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.600437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:52:56.600464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:52:56.600483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:52:56.600502 | orchestrator |
2026-03-10 00:52:56.600522 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:52:56.600540 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:02.460) 0:01:34.582 *********
2026-03-10 00:52:56.600572 | orchestrator |
2026-03-10 00:52:56.600593 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:52:56.600615 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:00.071) 0:01:34.654 *********
2026-03-10 00:52:56.600635 | orchestrator |
2026-03-10 00:52:56.600651 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:52:56.600663 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:00.080) 0:01:34.735 *********
2026-03-10 00:52:56.600673 | orchestrator |
2026-03-10 00:52:56.600684 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-10 00:52:56.600695 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:00.075) 0:01:34.810 *********
2026-03-10 00:52:56.600705 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.600716 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:56.600727 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:56.600738 | orchestrator |
2026-03-10 00:52:56.600748 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-10 00:52:56.600759 | orchestrator | Tuesday 10 March 2026 00:51:56 +0000 (0:00:04.919) 0:01:39.730 *********
2026-03-10 00:52:56.600769 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:56.600780 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:56.600791 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.600801 | orchestrator |
2026-03-10 00:52:56.600812 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-10 00:52:56.600823 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:07.647) 0:01:47.377 *********
2026-03-10 00:52:56.600833 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.600844 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:56.600855 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:56.600865 | orchestrator |
2026-03-10 00:52:56.600876 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-10 00:52:56.600886 | orchestrator | Tuesday 10 March 2026 00:52:12 +0000 (0:00:08.383) 0:01:55.761 *********
2026-03-10 00:52:56.600919 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:52:56.600930 | orchestrator |
2026-03-10 00:52:56.600941 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-10 00:52:56.600952 | orchestrator | Tuesday 10 March 2026 00:52:13 +0000 (0:00:00.140) 0:01:55.901 *********
2026-03-10 00:52:56.600962 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:56.600974 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:56.600985 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:56.600995 | orchestrator |
2026-03-10 00:52:56.601015 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-10 00:52:56.601026 | orchestrator | Tuesday 10 March 2026 00:52:13 +0000 (0:00:00.819) 0:01:56.721 *********
2026-03-10 00:52:56.601037 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:52:56.601048 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:52:56.601059 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.601070 | orchestrator |
2026-03-10 00:52:56.601080 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-10 00:52:56.601091 | orchestrator | Tuesday 10 March 2026 00:52:14 +0000 (0:00:00.663) 0:01:57.384 *********
2026-03-10 00:52:56.601102 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:56.601113 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:56.601123 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:56.601134 | orchestrator |
2026-03-10 00:52:56.601145 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-10 00:52:56.601155 | orchestrator | Tuesday 10 March 2026 00:52:15 +0000 (0:00:00.827) 0:01:58.211 *********
2026-03-10 00:52:56.601166 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:52:56.601177 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:52:56.601188 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.601198 | orchestrator |
2026-03-10 00:52:56.601209 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-10 00:52:56.601220 | orchestrator | Tuesday 10 March 2026 00:52:16 +0000 (0:00:00.908) 0:01:59.120 *********
2026-03-10 00:52:56.601238 | orchestrator | ok: [testbed-node-1]
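(Editor's note, not part of the captured log: the `Tuesday 10 March 2026 ... (0:00:00.140) 0:01:55.901` stamps above appear to come from Ansible's `profile_tasks` callback, which prints each task's duration in parentheses followed by the cumulative playbook runtime. A minimal sketch for extracting both values when post-processing a log like this one; the `parse_stamp` helper name is illustrative, not part of any tool used by this job:)

```python
import re

# Matches the profile_tasks-style timing stamp printed under each TASK header,
# e.g. "Tuesday 10 March 2026 00:52:04 +0000 (0:00:07.647) 0:01:47.377"
# First group of three: task duration H:MM:SS.mmm; second: cumulative runtime.
STAMP = re.compile(r"\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)")

def parse_stamp(line: str):
    """Return (task_duration_s, elapsed_s) from one timing line, or None."""
    m = STAMP.search(line)
    if not m:
        return None
    h1, m1, s1, h2, m2, s2 = m.groups()
    duration = int(h1) * 3600 + int(m1) * 60 + float(s1)
    elapsed = int(h2) * 3600 + int(m2) * 60 + float(s2)
    return duration, elapsed

# Example line taken verbatim from the ovn-northd restart handler above:
line = "Tuesday 10 March 2026 00:52:04 +0000 (0:00:07.647) 0:01:47.377 *********"
print(parse_stamp(line))
```

Sorting parsed durations in descending order is a quick way to spot the slowest tasks in a run (here, the container restart handlers dominate at 6-8 seconds each).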
2026-03-10 00:52:56.601249 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.601259 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.601270 | orchestrator | 2026-03-10 00:52:56.601281 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-10 00:52:56.601292 | orchestrator | Tuesday 10 March 2026 00:52:17 +0000 (0:00:00.845) 0:01:59.966 ********* 2026-03-10 00:52:56.601302 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.601313 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.601323 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.601334 | orchestrator | 2026-03-10 00:52:56.601345 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-10 00:52:56.601355 | orchestrator | Tuesday 10 March 2026 00:52:17 +0000 (0:00:00.715) 0:02:00.682 ********* 2026-03-10 00:52:56.601366 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:56.601382 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:56.601393 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:56.601404 | orchestrator | 2026-03-10 00:52:56.601415 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-10 00:52:56.601426 | orchestrator | Tuesday 10 March 2026 00:52:18 +0000 (0:00:00.326) 0:02:01.008 ********* 2026-03-10 00:52:56.601437 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601449 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601483 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601495 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 
00:52:56.601511 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601529 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601540 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601551 | orchestrator | 2026-03-10 00:52:56.601562 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-10 00:52:56.601573 | orchestrator | Tuesday 10 March 2026 00:52:19 +0000 (0:00:01.474) 0:02:02.483 ********* 2026-03-10 00:52:56.601589 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601601 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601612 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601623 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601710 | orchestrator | 2026-03-10 00:52:56.601721 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-10 00:52:56.601732 | orchestrator | Tuesday 10 March 2026 00:52:24 +0000 (0:00:04.670) 0:02:07.154 ********* 2026-03-10 00:52:56.601748 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601759 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601771 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601843 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:52:56.601866 | orchestrator | 2026-03-10 00:52:56.601879 | 
orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:52:56.601957 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:03.439) 0:02:10.593 *********
2026-03-10 00:52:56.601979 | orchestrator |
2026-03-10 00:52:56.601997 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:52:56.602076 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:00.071) 0:02:10.665 *********
2026-03-10 00:52:56.602093 | orchestrator |
2026-03-10 00:52:56.602103 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:52:56.602113 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:00.072) 0:02:10.737 *********
2026-03-10 00:52:56.602122 | orchestrator |
2026-03-10 00:52:56.602132 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-10 00:52:56.602141 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:00.077) 0:02:10.814 *********
2026-03-10 00:52:56.602150 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:56.602160 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:56.602169 | orchestrator |
2026-03-10 00:52:56.602185 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-10 00:52:56.602195 | orchestrator | Tuesday 10 March 2026 00:52:34 +0000 (0:00:06.323) 0:02:17.138 *********
2026-03-10 00:52:56.602205 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:56.602215 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:56.602224 | orchestrator |
2026-03-10 00:52:56.602234 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-10 00:52:56.602243 | orchestrator | Tuesday 10 March 2026 00:52:40 +0000 (0:00:06.603) 0:02:23.742 *********
2026-03-10 00:52:56.602253 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:56.602262 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:56.602272 | orchestrator |
2026-03-10 00:52:56.602281 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-10 00:52:56.602291 | orchestrator | Tuesday 10 March 2026 00:52:47 +0000 (0:00:06.823) 0:02:30.565 *********
2026-03-10 00:52:56.602300 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:52:56.602309 | orchestrator |
2026-03-10 00:52:56.602319 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-10 00:52:56.602329 | orchestrator | Tuesday 10 March 2026 00:52:47 +0000 (0:00:00.142) 0:02:30.707 *********
2026-03-10 00:52:56.602338 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:56.602348 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:56.602357 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:56.602366 | orchestrator |
2026-03-10 00:52:56.602376 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-10 00:52:56.602394 | orchestrator | Tuesday 10 March 2026 00:52:48 +0000 (0:00:01.003) 0:02:31.711 *********
2026-03-10 00:52:56.602404 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:52:56.602413 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:52:56.602423 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.602432 | orchestrator |
2026-03-10 00:52:56.602442 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-10 00:52:56.602451 | orchestrator | Tuesday 10 March 2026 00:52:49 +0000 (0:00:00.664) 0:02:32.375 *********
2026-03-10 00:52:56.602466 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:56.602484 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:56.602500 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:56.602515 | orchestrator |
2026-03-10 00:52:56.602531 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-10 00:52:56.602544 | orchestrator | Tuesday 10 March 2026 00:52:50 +0000 (0:00:00.838) 0:02:33.214 *********
2026-03-10 00:52:56.602559 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:52:56.602574 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:56.602590 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:52:56.602605 | orchestrator |
2026-03-10 00:52:56.602621 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-10 00:52:56.602637 | orchestrator | Tuesday 10 March 2026 00:52:51 +0000 (0:00:00.706) 0:02:33.920 *********
2026-03-10 00:52:56.602653 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:56.602669 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:56.602686 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:56.602703 | orchestrator |
2026-03-10 00:52:56.602721 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-10 00:52:56.602738 | orchestrator | Tuesday 10 March 2026 00:52:51 +0000 (0:00:00.907) 0:02:34.828 *********
2026-03-10 00:52:56.602755 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:56.602768 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:56.602778 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:56.602787 | orchestrator |
2026-03-10 00:52:56.602797 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:52:56.602807 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-10 00:52:56.602818 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-10 00:52:56.602838 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-10 00:52:56.602848 | orchestrator | testbed-node-3 : ok=12  changed=8  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:52:56.602859 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:52:56.602868 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:52:56.602878 | orchestrator | 2026-03-10 00:52:56.602888 | orchestrator | 2026-03-10 00:52:56.602915 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:52:56.602925 | orchestrator | Tuesday 10 March 2026 00:52:52 +0000 (0:00:01.023) 0:02:35.851 ********* 2026-03-10 00:52:56.602935 | orchestrator | =============================================================================== 2026-03-10 00:52:56.602945 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.49s 2026-03-10 00:52:56.602954 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.96s 2026-03-10 00:52:56.602964 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.21s 2026-03-10 00:52:56.602973 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.25s 2026-03-10 00:52:56.602996 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 11.24s 2026-03-10 00:52:56.603006 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.79s 2026-03-10 00:52:56.603016 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.67s 2026-03-10 00:52:56.603031 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.44s 2026-03-10 00:52:56.603041 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.81s 2026-03-10 00:52:56.603050 | orchestrator | ovn-controller : Ensuring systemd override directory exists 
------------- 2.78s 2026-03-10 00:52:56.603060 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.46s 2026-03-10 00:52:56.603070 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.29s 2026-03-10 00:52:56.603079 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.88s 2026-03-10 00:52:56.603089 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.86s 2026-03-10 00:52:56.603098 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.79s 2026-03-10 00:52:56.603108 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.77s 2026-03-10 00:52:56.603117 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.70s 2026-03-10 00:52:56.603127 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2026-03-10 00:52:56.603136 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.40s 2026-03-10 00:52:56.603146 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.36s 2026-03-10 00:52:56.603155 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:52:56.603165 | orchestrator | 2026-03-10 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:59.637468 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED 2026-03-10 00:52:59.639558 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:52:59.639856 | orchestrator | 2026-03-10 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:53:02.685936 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task 
bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED
2026-03-10 00:53:02.686069 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED
2026-03-10 00:53:02.686079 | orchestrator | 2026-03-10 00:53:02 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks bda68889-2ec7-4804-baae-2901576904b3 and 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 ("is in state STARTED" / "Wait 1 second(s) until the next check") repeated every ~3 seconds from 00:53:05 to 00:56:20 ...]
2026-03-10 00:56:23.878672 | orchestrator | 2026-03-10 00:56:23 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state STARTED
2026-03-10 00:56:23.879107 | orchestrator | 2026-03-10 00:56:23 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED
2026-03-10 00:56:23.879258 | orchestrator | 2026-03-10 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:56:26.928027 | orchestrator |
2026-03-10 00:56:26.928200 | orchestrator |
2026-03-10 00:56:26.928219 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:56:26.928232 | orchestrator |
2026-03-10 00:56:26.928243 |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 00:56:26.928255 | orchestrator | Tuesday 10 March 2026 00:49:01 +0000 (0:00:00.285) 0:00:00.285 *********
2026-03-10 00:56:26.928266 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.928308 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.928321 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.928332 | orchestrator |
2026-03-10 00:56:26.928343 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 00:56:26.928355 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.384) 0:00:00.670 *********
2026-03-10 00:56:26.928382 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-10 00:56:26.928393 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-10 00:56:26.928404 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-10 00:56:26.928415 | orchestrator |
2026-03-10 00:56:26.928426 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-10 00:56:26.928437 | orchestrator |
2026-03-10 00:56:26.928448 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-10 00:56:26.928489 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:00.587) 0:00:01.257 *********
2026-03-10 00:56:26.928502 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.928515 | orchestrator |
2026-03-10 00:56:26.928527 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-10 00:56:26.928540 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.611) 0:00:01.869 *********
2026-03-10 00:56:26.928608 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.928653 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.928682 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.928887 | orchestrator |
2026-03-10 00:56:26.928912 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-10 00:56:26.928932 | orchestrator | Tuesday 10 March 2026 00:49:04 +0000 (0:00:00.726) 0:00:02.595 *********
2026-03-10 00:56:26.928950 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.928967 | orchestrator |
2026-03-10 00:56:26.928985 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-10 00:56:26.929003 | orchestrator | Tuesday 10 March 2026 00:49:05 +0000 (0:00:01.353) 0:00:03.948 *********
2026-03-10 00:56:26.929022 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.929042 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.929061 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.929080 | orchestrator |
2026-03-10 00:56:26.929100 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-10 00:56:26.929119 | orchestrator | Tuesday 10 March 2026 00:49:06 +0000 (0:00:00.752) 0:00:04.701 *********
2026-03-10 00:56:26.929139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-10 00:56:26.929159 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-10 00:56:26.929180 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-10 00:56:26.929196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-10 00:56:26.929207 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-10 00:56:26.929218 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-10 00:56:26.929230 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-10 00:56:26.929241 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-10 00:56:26.929291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-10 00:56:26.929303 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-10 00:56:26.929315 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-10 00:56:26.929344 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-10 00:56:26.929405 | orchestrator |
2026-03-10 00:56:26.929416 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-10 00:56:26.929428 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:04.984) 0:00:09.685 *********
2026-03-10 00:56:26.929439 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-10 00:56:26.929512 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-10 00:56:26.929535 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-10 00:56:26.929557 | orchestrator |
2026-03-10 00:56:26.929577 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-10 00:56:26.929596 | orchestrator | Tuesday 10 March 2026 00:49:12 +0000 (0:00:01.063) 0:00:10.748 *********
2026-03-10 00:56:26.929615 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-10 00:56:26.929675 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-10 00:56:26.929688 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-10 00:56:26.929699 | orchestrator |
2026-03-10 00:56:26.929710 |
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-10 00:56:26.929721 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:02.682) 0:00:13.431 ********* 2026-03-10 00:56:26.929732 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-10 00:56:26.929763 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.929796 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-10 00:56:26.929809 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.929878 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-10 00:56:26.929891 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.929902 | orchestrator | 2026-03-10 00:56:26.929913 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-10 00:56:26.929923 | orchestrator | Tuesday 10 March 2026 00:49:16 +0000 (0:00:01.933) 0:00:15.365 ********* 2026-03-10 00:56:26.929938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.929958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.930010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.930264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.930344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.930385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.930433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.930455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.930468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.930479 | orchestrator | 2026-03-10 00:56:26.930490 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-10 00:56:26.930501 | orchestrator | Tuesday 10 March 2026 00:49:18 +0000 (0:00:01.757) 0:00:17.122 ********* 2026-03-10 00:56:26.930512 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.930524 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.930535 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.930545 | orchestrator | 2026-03-10 00:56:26.930556 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-10 00:56:26.930567 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:01.944) 0:00:19.067 ********* 2026-03-10 00:56:26.930577 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-10 00:56:26.930588 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-10 00:56:26.930599 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-10 00:56:26.930609 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-10 00:56:26.930620 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-10 
00:56:26.930704 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-10 00:56:26.930718 | orchestrator | 2026-03-10 00:56:26.930729 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-10 00:56:26.930739 | orchestrator | Tuesday 10 March 2026 00:49:23 +0000 (0:00:03.319) 0:00:22.386 ********* 2026-03-10 00:56:26.930749 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.930759 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.930768 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.930778 | orchestrator | 2026-03-10 00:56:26.930787 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-10 00:56:26.930797 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:02.606) 0:00:24.993 ********* 2026-03-10 00:56:26.930815 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:56:26.930825 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:56:26.930896 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:56:26.930917 | orchestrator | 2026-03-10 00:56:26.930927 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-10 00:56:26.930937 | orchestrator | Tuesday 10 March 2026 00:49:29 +0000 (0:00:03.204) 0:00:28.198 ********* 2026-03-10 00:56:26.930948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.931011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.931031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.931043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:56:26.931054 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.931064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.931075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.931092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.931109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:56:26.931120 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.931134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.931145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.931155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.931166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:56:26.931181 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.931191 | orchestrator | 2026-03-10 00:56:26.931201 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-10 00:56:26.931211 | orchestrator | Tuesday 10 March 2026 00:49:31 +0000 (0:00:01.831) 0:00:30.030 ********* 2026-03-10 00:56:26.931221 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 
00:56:26.931274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.931295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:56:26.931312 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.931375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:56:26.931420 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.931442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7', '__omit_place_holder__e6a5a87842f9b59fa51b9a5ca5ebd6ac3c8122f7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:56:26.931487 | orchestrator | 2026-03-10 
00:56:26.931499 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-10 00:56:26.931509 | orchestrator | Tuesday 10 March 2026 00:49:35 +0000 (0:00:03.980) 0:00:34.010 ********* 2026-03-10 00:56:26.931519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.931602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.931612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.931622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.931655 | orchestrator | 2026-03-10 00:56:26.931666 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-10 00:56:26.931676 | orchestrator | Tuesday 10 March 2026 00:49:39 +0000 (0:00:03.722) 0:00:37.733 ********* 2026-03-10 00:56:26.931686 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-10 00:56:26.931704 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-10 00:56:26 | INFO  | Task bda68889-2ec7-4804-baae-2901576904b3 is in state SUCCESS 2026-03-10 00:56:26.931768 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-10 00:56:26.931806 | orchestrator | 2026-03-10 00:56:26.931817 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-10 00:56:26.931827 | orchestrator | Tuesday 10 March 2026 00:49:44 +0000 (0:00:05.420) 0:00:43.154 ********* 2026-03-10 00:56:26.931843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-10 00:56:26.931854 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-10 00:56:26.931864 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-10 00:56:26.931901 | orchestrator | 2026-03-10 00:56:26.931911 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-10 00:56:26.931921 | orchestrator | Tuesday 10 March 2026 00:49:49 +0000 (0:00:04.703) 0:00:47.858 ********* 2026-03-10 00:56:26.931958 | 
orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.931976 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.931986 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.931996 | orchestrator | 2026-03-10 00:56:26.932006 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-10 00:56:26.932041 | orchestrator | Tuesday 10 March 2026 00:49:50 +0000 (0:00:01.306) 0:00:49.164 ********* 2026-03-10 00:56:26.932051 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-10 00:56:26.932061 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-10 00:56:26.932071 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-10 00:56:26.932080 | orchestrator | 2026-03-10 00:56:26.932090 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-10 00:56:26.932100 | orchestrator | Tuesday 10 March 2026 00:49:53 +0000 (0:00:03.169) 0:00:52.333 ********* 2026-03-10 00:56:26.932110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-10 00:56:26.932120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-10 00:56:26.932130 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-10 00:56:26.932139 | orchestrator | 2026-03-10 00:56:26.932149 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-10 00:56:26.932159 | orchestrator | Tuesday 10 March 2026 00:49:57 +0000 (0:00:03.795) 0:00:56.129 ********* 2026-03-10 
00:56:26.932168 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-10 00:56:26.932178 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-10 00:56:26.932211 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-10 00:56:26.932221 | orchestrator | 2026-03-10 00:56:26.932231 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-10 00:56:26.932241 | orchestrator | Tuesday 10 March 2026 00:49:59 +0000 (0:00:01.986) 0:00:58.115 ********* 2026-03-10 00:56:26.932250 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-10 00:56:26.932260 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-10 00:56:26.932270 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-10 00:56:26.932280 | orchestrator | 2026-03-10 00:56:26.932289 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-10 00:56:26.932299 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:02.146) 0:01:00.261 ********* 2026-03-10 00:56:26.932309 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.932319 | orchestrator | 2026-03-10 00:56:26.932328 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-10 00:56:26.932338 | orchestrator | Tuesday 10 March 2026 00:50:03 +0000 (0:00:01.364) 0:01:01.626 ********* 2026-03-10 00:56:26.932348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.932368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.932391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.932402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.932412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.932422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.932432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.932490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.932513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.932524 | orchestrator | 2026-03-10 00:56:26.932534 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-10 00:56:26.932544 | orchestrator | Tuesday 10 March 2026 00:50:07 +0000 (0:00:04.470) 0:01:06.096 ********* 2026-03-10 00:56:26.932554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.932564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.932575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.932585 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.932595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.932679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.932705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.932717 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.932733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.932743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.932754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.932764 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.932773 | orchestrator | 2026-03-10 00:56:26.932783 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-10 00:56:26.932793 | orchestrator | Tuesday 10 March 2026 00:50:08 +0000 (0:00:00.913) 0:01:07.010 
********* 2026-03-10 00:56:26.932803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.932813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.932838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.932853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.932863 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.932873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.932884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.932894 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 00:56:26.932903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.932914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.932931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.932941 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 00:56:26.932951 | orchestrator | 2026-03-10 00:56:26.932961 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-10 00:56:26.932971 | orchestrator | Tuesday 10 March 2026 00:50:09 +0000 (0:00:00.847) 0:01:07.858 ********* 2026-03-10 00:56:26.932991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933066 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.933082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933148 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.933174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933234 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.933329 | orchestrator | 2026-03-10 00:56:26.933348 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-10 00:56:26.933366 | orchestrator | Tuesday 10 March 2026 00:50:10 +0000 (0:00:00.816) 0:01:08.674 ********* 2026-03-10 00:56:26.933386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-10 00:56:26.933431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933442 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.933461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-10 00:56:26.933489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933527 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.933539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-10 00:56:26.933566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933576 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.933586 | orchestrator | 2026-03-10 00:56:26.933596 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-10 00:56:26.933605 | orchestrator | Tuesday 10 March 2026 00:50:10 +0000 (0:00:00.845) 0:01:09.519 ********* 2026-03-10 00:56:26.933616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933723 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.933733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933794 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.933805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.933873 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.933883 | orchestrator | 2026-03-10 00:56:26.933893 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-10 00:56:26.933903 | orchestrator | Tuesday 10 March 2026 00:50:11 +0000 (0:00:00.863) 0:01:10.383 ********* 2026-03-10 00:56:26.933913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.933930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.933940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.934012 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.934102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.934122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935124 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.935144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935183 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.935187 | orchestrator | 2026-03-10 00:56:26.935192 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-10 00:56:26.935197 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.925) 0:01:11.308 ********* 2026-03-10 00:56:26.935202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935226 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.935230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935248 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.935252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935284 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.935289 | orchestrator | 2026-03-10 00:56:26.935293 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-10 00:56:26.935301 | orchestrator | Tuesday 10 March 2026 00:50:13 +0000 (0:00:00.794) 0:01:12.103 ********* 2026-03-10 00:56:26.935308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935329 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.935336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935356 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.935367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:56:26.935376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:56:26.935390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:56:26.935396 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.935402 | orchestrator | 2026-03-10 00:56:26.935408 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] 
************************ 2026-03-10 00:56:26.935415 | orchestrator | Tuesday 10 March 2026 00:50:14 +0000 (0:00:00.805) 0:01:12.908 ********* 2026-03-10 00:56:26.935421 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-10 00:56:26.935428 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-10 00:56:26.935435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-10 00:56:26.935441 | orchestrator | 2026-03-10 00:56:26.935447 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-10 00:56:26.935453 | orchestrator | Tuesday 10 March 2026 00:50:16 +0000 (0:00:01.826) 0:01:14.735 ********* 2026-03-10 00:56:26.935461 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-10 00:56:26.935467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-10 00:56:26.935474 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-10 00:56:26.935480 | orchestrator | 2026-03-10 00:56:26.935487 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-10 00:56:26.935493 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:01.497) 0:01:16.232 ********* 2026-03-10 00:56:26.935500 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 00:56:26.935506 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 00:56:26.935512 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-03-10 00:56:26.935518 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-10 00:56:26.935524 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.935531 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-10 00:56:26.935537 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.935544 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-10 00:56:26.935551 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.935558 | orchestrator | 2026-03-10 00:56:26.935564 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-10 00:56:26.935570 | orchestrator | Tuesday 10 March 2026 00:50:19 +0000 (0:00:01.511) 0:01:17.744 ********* 2026-03-10 00:56:26.935588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.935600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.935608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:56:26.935614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.935621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.935652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:56:26.935665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.935678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.935690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:56:26.935698 | orchestrator | 2026-03-10 00:56:26.935705 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-10 00:56:26.935712 | orchestrator | Tuesday 10 March 2026 00:50:22 +0000 (0:00:02.860) 0:01:20.604 ********* 2026-03-10 00:56:26.935718 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.935725 | orchestrator | 2026-03-10 00:56:26.935733 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-10 00:56:26.935739 | orchestrator | Tuesday 10 March 2026 00:50:23 +0000 (0:00:00.923) 0:01:21.527 ********* 2026-03-10 00:56:26.935748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-10 00:56:26.935757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:56:26.935764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-10 00:56:26.935782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:56:26.935801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-10 00:56:26.935817 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:56:26.935829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935838 | orchestrator | 2026-03-10 00:56:26.935843 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-03-10 00:56:26.935847 | orchestrator | Tuesday 10 March 2026 00:50:29 +0000 (0:00:06.391) 0:01:27.919 ********* 2026-03-10 00:56:26.935851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-10 00:56:26.935856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:56:26.935864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935877 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.935892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-10 00:56:26.935900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:56:26.935907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935928 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.935935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-10 00:56:26.935942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:56:26.935956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-03-10 00:56:26.935964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.935971 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.935977 | orchestrator | 2026-03-10 00:56:26.935984 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-10 00:56:26.935991 | orchestrator | Tuesday 10 March 2026 00:50:31 +0000 (0:00:01.888) 0:01:29.807 ********* 2026-03-10 00:56:26.935998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:56:26.936007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:56:26.936014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:56:26.936022 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:56:26.936041 | 
orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:56:26.936055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:56:26.936059 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936063 | orchestrator | 2026-03-10 00:56:26.936067 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-10 00:56:26.936072 | orchestrator | Tuesday 10 March 2026 00:50:32 +0000 (0:00:01.342) 0:01:31.150 ********* 2026-03-10 00:56:26.936076 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.936083 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.936089 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.936096 | orchestrator | 2026-03-10 00:56:26.936103 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-10 00:56:26.936110 | orchestrator | Tuesday 10 March 2026 00:50:34 +0000 (0:00:01.655) 0:01:32.806 ********* 2026-03-10 00:56:26.936117 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.936124 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.936131 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.936138 | orchestrator | 2026-03-10 00:56:26.936145 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-10 00:56:26.936152 | orchestrator | Tuesday 10 March 2026 00:50:37 +0000 (0:00:02.812) 0:01:35.618 ********* 2026-03-10 00:56:26.936159 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.936167 | 
orchestrator | 2026-03-10 00:56:26.936173 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-10 00:56:26.936180 | orchestrator | Tuesday 10 March 2026 00:50:38 +0000 (0:00:01.039) 0:01:36.657 ********* 2026-03-10 00:56:26.936191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.936201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936206 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.936219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.936239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936252 | orchestrator | 2026-03-10 00:56:26.936256 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-10 00:56:26.936261 | orchestrator | Tuesday 10 March 2026 00:50:42 +0000 (0:00:04.342) 0:01:41.000 ********* 2026-03-10 00:56:26.936265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.936270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936283 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936290 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.936298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.936308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936312 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.936337 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936341 | orchestrator | 2026-03-10 00:56:26.936345 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-10 00:56:26.936350 | orchestrator | Tuesday 10 March 2026 00:50:43 +0000 (0:00:00.847) 0:01:41.847 ********* 2026-03-10 00:56:26.936355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:56:26.936359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:56:26.936364 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:56:26.936373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:56:26.936378 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:56:26.936387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:56:26.936391 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936395 | orchestrator | 2026-03-10 00:56:26.936399 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-10 00:56:26.936403 | orchestrator | Tuesday 10 March 2026 00:50:44 +0000 (0:00:01.276) 0:01:43.123 ********* 2026-03-10 00:56:26.936407 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.936412 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.936416 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.936420 | orchestrator | 2026-03-10 00:56:26.936425 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-10 00:56:26.936429 | orchestrator | Tuesday 10 March 2026 00:50:46 +0000 (0:00:01.842) 0:01:44.965 ********* 2026-03-10 00:56:26.936434 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.936438 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.936442 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.936447 | orchestrator | 2026-03-10 00:56:26.936451 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-10 00:56:26.936456 | orchestrator | 
Tuesday 10 March 2026 00:50:48 +0000 (0:00:02.254) 0:01:47.220 ********* 2026-03-10 00:56:26.936460 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936464 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936468 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936472 | orchestrator | 2026-03-10 00:56:26.936476 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-10 00:56:26.936480 | orchestrator | Tuesday 10 March 2026 00:50:48 +0000 (0:00:00.288) 0:01:47.509 ********* 2026-03-10 00:56:26.936485 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.936489 | orchestrator | 2026-03-10 00:56:26.936493 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-10 00:56:26.936498 | orchestrator | Tuesday 10 March 2026 00:50:49 +0000 (0:00:00.807) 0:01:48.316 ********* 2026-03-10 00:56:26.936507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-10 00:56:26.936518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-10 00:56:26.936523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-10 00:56:26.936528 | orchestrator | 2026-03-10 00:56:26.936532 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-10 00:56:26.936536 | orchestrator | Tuesday 10 March 2026 00:50:53 +0000 (0:00:03.304) 0:01:51.621 ********* 2026-03-10 00:56:26.936541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-10 00:56:26.936546 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-10 00:56:26.936558 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-10 00:56:26.936574 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936579 | orchestrator | 2026-03-10 00:56:26.936583 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-10 00:56:26.936587 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:01.637) 0:01:53.259 ********* 2026-03-10 00:56:26.936593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:56:26.936599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:56:26.936603 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936608 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:56:26.936612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:56:26.936616 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:56:26.936626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:56:26.936659 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936666 | orchestrator | 2026-03-10 00:56:26.936673 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-10 00:56:26.936681 | orchestrator | Tuesday 10 March 2026 00:50:57 +0000 (0:00:02.347) 0:01:55.607 ********* 2026-03-10 00:56:26.936685 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936690 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936694 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936698 | orchestrator | 2026-03-10 00:56:26.936702 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-10 00:56:26.936706 | orchestrator | Tuesday 10 March 2026 00:50:57 +0000 (0:00:00.702) 0:01:56.309 ********* 2026-03-10 00:56:26.936710 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.936714 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.936719 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.936723 | orchestrator | 2026-03-10 00:56:26.936727 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-10 00:56:26.937073 | orchestrator | Tuesday 10 March 2026 00:50:59 +0000 (0:00:01.956) 0:01:58.266 ********* 2026-03-10 00:56:26.937095 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.937103 | orchestrator | 2026-03-10 00:56:26.937110 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-10 00:56:26.937116 | orchestrator | Tuesday 10 March 2026 00:51:01 +0000 (0:00:01.526) 0:01:59.792 ********* 2026-03-10 00:56:26.937129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.937139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 
00:56:26.937164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.937184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.937205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937224 | orchestrator | 2026-03-10 00:56:26.937228 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-10 00:56:26.937233 | orchestrator | Tuesday 10 March 2026 00:51:07 +0000 (0:00:06.315) 0:02:06.107 ********* 2026-03-10 00:56:26.937237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.937245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937261 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.937271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.937276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937293 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.937297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.937308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 
00:56:26.937318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937325 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.937329 | orchestrator | 2026-03-10 00:56:26.937333 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-10 00:56:26.937338 | orchestrator | Tuesday 10 March 2026 00:51:09 +0000 (0:00:01.550) 0:02:07.657 ********* 2026-03-10 00:56:26.937343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:56:26.937347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:56:26.937352 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.937356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:56:26.937360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:56:26.937365 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.937369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:56:26.937373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:56:26.937377 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.937381 | orchestrator | 2026-03-10 00:56:26.937386 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-10 00:56:26.937390 | orchestrator | Tuesday 10 March 2026 00:51:10 +0000 (0:00:01.494) 0:02:09.152 ********* 2026-03-10 00:56:26.937394 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.937398 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.937402 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.937406 | orchestrator | 2026-03-10 00:56:26.937411 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-10 00:56:26.937415 | orchestrator | Tuesday 10 March 2026 00:51:12 +0000 (0:00:01.441) 0:02:10.594 ********* 2026-03-10 00:56:26.937419 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.937423 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.937427 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.937431 | orchestrator | 2026-03-10 00:56:26.937438 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-10 
00:56:26.937443 | orchestrator | Tuesday 10 March 2026 00:51:14 +0000 (0:00:02.251) 0:02:12.845 ********* 2026-03-10 00:56:26.937447 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.937451 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.937455 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.937459 | orchestrator | 2026-03-10 00:56:26.937463 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-10 00:56:26.937470 | orchestrator | Tuesday 10 March 2026 00:51:15 +0000 (0:00:00.686) 0:02:13.532 ********* 2026-03-10 00:56:26.937475 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.937479 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.937487 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.937491 | orchestrator | 2026-03-10 00:56:26.937496 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-10 00:56:26.937500 | orchestrator | Tuesday 10 March 2026 00:51:15 +0000 (0:00:00.624) 0:02:14.156 ********* 2026-03-10 00:56:26.937504 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.937508 | orchestrator | 2026-03-10 00:56:26.937512 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-10 00:56:26.937516 | orchestrator | Tuesday 10 March 2026 00:51:16 +0000 (0:00:00.974) 0:02:15.131 ********* 2026-03-10 00:56:26.937521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 00:56:26.937526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:56:26.937530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 00:56:26.937579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:56:26.937583 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 00:56:26.937618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:56:26.937623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-03-10 00:56:26.937705 | orchestrator | 2026-03-10 00:56:26.937710 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-10 00:56:26.937715 | orchestrator | Tuesday 10 March 2026 00:51:22 +0000 (0:00:06.111) 0:02:21.242 ********* 2026-03-10 00:56:26.937720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 00:56:26.937728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:56:26.937739 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937765 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.937770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 00:56:26.937786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:56:26.937792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937822 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.937833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 00:56:26.937838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:56:26.937844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.937876 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.937881 | orchestrator | 2026-03-10 00:56:26.937886 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-10 00:56:26.937891 | orchestrator | Tuesday 10 March 2026 00:51:24 +0000 (0:00:02.067) 0:02:23.310 ********* 2026-03-10 00:56:26.937899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:56:26.937905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:56:26.937910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2026-03-10 00:56:26.937915 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.937920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:56:26.937925 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.937930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:56:26.937936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:56:26.937941 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.937946 | orchestrator | 2026-03-10 00:56:26.937951 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-10 00:56:26.937956 | orchestrator | Tuesday 10 March 2026 00:51:27 +0000 (0:00:02.367) 0:02:25.678 ********* 2026-03-10 00:56:26.937961 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.937966 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.937971 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.937976 | orchestrator | 2026-03-10 00:56:26.937981 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-10 00:56:26.937986 | orchestrator | Tuesday 10 March 2026 00:51:29 +0000 (0:00:02.599) 0:02:28.278 ********* 2026-03-10 00:56:26.937990 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.937996 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.938001 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.938006 | 
orchestrator | 2026-03-10 00:56:26.938011 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-10 00:56:26.938039 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:02.452) 0:02:30.730 ********* 2026-03-10 00:56:26.938047 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.938052 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.938057 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.938062 | orchestrator | 2026-03-10 00:56:26.938067 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-10 00:56:26.938071 | orchestrator | Tuesday 10 March 2026 00:51:32 +0000 (0:00:00.592) 0:02:31.323 ********* 2026-03-10 00:56:26.938076 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.938084 | orchestrator | 2026-03-10 00:56:26.938091 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-10 00:56:26.938098 | orchestrator | Tuesday 10 March 2026 00:51:33 +0000 (0:00:00.975) 0:02:32.298 ********* 2026-03-10 00:56:26.938126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 00:56:26.938139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.938155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 00:56:26.938169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.938179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 00:56:26.938190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.938195 | orchestrator | 2026-03-10 00:56:26.938199 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-10 00:56:26.938203 | orchestrator | Tuesday 10 March 2026 00:51:40 +0000 (0:00:06.525) 0:02:38.824 ********* 2026-03-10 00:56:26.938208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 00:56:26.938223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.938228 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.938233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 
00:56:26.938252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.938257 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 00:56:26.938262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-10 00:56:26.938274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-10 00:56:26.938279 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938283 | orchestrator |
2026-03-10 00:56:26.938288 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-03-10 00:56:26.938292 | orchestrator | Tuesday 10 March 2026 00:51:44 +0000 (0:00:03.798) 0:02:42.622 *********
2026-03-10 00:56:26.938298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-10 00:56:26.938303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-10 00:56:26.938312 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-10 00:56:26.938321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-10 00:56:26.938326 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-10 00:56:26.938335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-10 00:56:26.938339 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938343 | orchestrator |
2026-03-10 00:56:26.938347 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-10 00:56:26.938351 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:04.060) 0:02:46.683 *********
2026-03-10 00:56:26.938355 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.938359 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.938363 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.938367 | orchestrator |
2026-03-10 00:56:26.938372 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-10 00:56:26.938376 | orchestrator | Tuesday 10 March 2026 00:51:49 +0000 (0:00:01.353) 0:02:48.036 *********
2026-03-10 00:56:26.938380 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.938384 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.938388 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.938392 | orchestrator |
2026-03-10 00:56:26.938399 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-10 00:56:26.938403 | orchestrator | Tuesday 10 March 2026 00:51:52 +0000 (0:00:02.574) 0:02:50.611 *********
2026-03-10 00:56:26.938407 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938412 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938416 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938420 | orchestrator |
2026-03-10 00:56:26.938424 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-10 00:56:26.938431 | orchestrator | Tuesday 10 March 2026 00:51:52 +0000 (0:00:00.900) 0:02:51.511 *********
2026-03-10 00:56:26.938439 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.938443 | orchestrator |
2026-03-10 00:56:26.938447 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-10 00:56:26.938451 | orchestrator | Tuesday 10 March 2026 00:51:54 +0000 (0:00:01.023) 0:02:52.534 *********
2026-03-10 00:56:26.938456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 00:56:26.938461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 00:56:26.938465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 00:56:26.938469 | orchestrator |
2026-03-10 00:56:26.938473 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-03-10 00:56:26.938477 | orchestrator | Tuesday 10 March 2026 00:51:59 +0000 (0:00:05.343) 0:02:57.878 *********
2026-03-10 00:56:26.938482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 00:56:26.938489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 00:56:26.938496 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938501 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 00:56:26.938512 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938516 | orchestrator |
2026-03-10 00:56:26.938521 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-03-10 00:56:26.938525 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:00.859) 0:02:58.737 *********
2026-03-10 00:56:26.938529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-10 00:56:26.938533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-10 00:56:26.938537 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-10 00:56:26.938546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-10 00:56:26.938550 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-10 00:56:26.938559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-10 00:56:26.938563 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938567 | orchestrator |
2026-03-10 00:56:26.938571 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-10 00:56:26.938575 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:00.730) 0:02:59.468 *********
2026-03-10 00:56:26.938579 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.938584 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.938588 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.938592 | orchestrator |
2026-03-10 00:56:26.938596 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-10 00:56:26.938600 | orchestrator | Tuesday 10 March 2026 00:52:02 +0000 (0:00:01.396) 0:03:00.864 *********
2026-03-10 00:56:26.938604 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.938608 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.938612 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.938616 | orchestrator |
2026-03-10 00:56:26.938620 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-10 00:56:26.938624 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:02.327) 0:03:03.194 *********
2026-03-10 00:56:26.938673 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938680 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938692 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938696 | orchestrator |
2026-03-10 00:56:26.938701 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-10 00:56:26.938705 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.737) 0:03:03.932 *********
2026-03-10 00:56:26.938709 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.938713 | orchestrator |
2026-03-10 00:56:26.938717 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-10 00:56:26.938722 | orchestrator | Tuesday 10 March 2026 00:52:06 +0000 (0:00:01.121) 0:03:05.053 *********
2026-03-10 00:56:26.938735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 00:56:26.938741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 00:56:26.938759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 00:56:26.938764 | orchestrator |
2026-03-10 00:56:26.938769 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-03-10 00:56:26.938773 | orchestrator | Tuesday 10 March 2026 00:52:10 +0000 (0:00:04.038) 0:03:09.092 *********
2026-03-10 00:56:26.938781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 00:56:26.938790 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 00:56:26.938803 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 00:56:26.938823 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938828 | orchestrator |
2026-03-10 00:56:26.938832 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-03-10 00:56:26.938837 | orchestrator | Tuesday 10 March 2026 00:52:12 +0000 (0:00:01.490) 0:03:10.582 *********
2026-03-10 00:56:26.938842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-10 00:56:26.938847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-10 00:56:26.938853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-10 00:56:26.938858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-10 00:56:26.938863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-10 00:56:26.938872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-10 00:56:26.938877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-10 00:56:26.938881 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.938886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-10 00:56:26.938890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-10 00:56:26.938894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-10 00:56:26.938899 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.938906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-10 00:56:26.938915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-10 00:56:26.938919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-10 00:56:26.938924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-10 00:56:26.938928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-10 00:56:26.938933 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.938937 | orchestrator |
2026-03-10 00:56:26.938941 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-10 00:56:26.938946 | orchestrator | Tuesday 10 March 2026 00:52:13 +0000 (0:00:01.253) 0:03:11.835 *********
2026-03-10 00:56:26.938950 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.938955 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.938959 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.938963 | orchestrator |
2026-03-10 00:56:26.938968 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-10 00:56:26.938972 | orchestrator | Tuesday 10 March 2026 00:52:14 +0000 (0:00:01.284) 0:03:13.120 *********
2026-03-10 00:56:26.938982 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.938987 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.938991 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.938996 | orchestrator |
2026-03-10 00:56:26.939000 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-10 00:56:26.939005 | orchestrator | Tuesday 10 March 2026 00:52:16 +0000 (0:00:02.128) 0:03:15.248 *********
2026-03-10 00:56:26.939009 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.939013 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.939018 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.939022 | orchestrator |
2026-03-10 00:56:26.939026 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-03-10 00:56:26.939031 | orchestrator | Tuesday 10 March 2026 00:52:17 +0000 (0:00:00.346) 0:03:15.594 *********
2026-03-10 00:56:26.939035 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.939039 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.939044 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.939048 | orchestrator |
2026-03-10 00:56:26.939052 | orchestrator | TASK [include_role : keystone] *************************************************
2026-03-10 00:56:26.939057 | orchestrator | Tuesday 10 March 2026 00:52:17 +0000 (0:00:00.586) 0:03:16.181 *********
2026-03-10 00:56:26.939061 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.939066 | orchestrator |
2026-03-10 00:56:26.939070 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-03-10 00:56:26.939075 | orchestrator | Tuesday 10 March 2026 00:52:18 +0000 (0:00:01.041) 0:03:17.222 *********
2026-03-10 00:56:26.939082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-10 00:56:26.939098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name':
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:56:26.939107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:56:26.939119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 00:56:26.939127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:56:26.939135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:56:26.939145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 00:56:26.939154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:56:26.939162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2026-03-10 00:56:26.939167 | orchestrator | 2026-03-10 00:56:26.939171 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-10 00:56:26.939175 | orchestrator | Tuesday 10 March 2026 00:52:23 +0000 (0:00:04.720) 0:03:21.942 ********* 2026-03-10 00:56:26.939180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 00:56:26.939185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-03-10 00:56:26.939190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:56:26.939194 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.939205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 00:56:26.939213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:56:26.939218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:56:26.939223 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.939228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 00:56:26.939233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:56:26.939241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:56:26.939246 | orchestrator | 2026-03-10 00:56:26 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:26.939251 | orchestrator | 2026-03-10 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:26.939259 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.939264 | orchestrator | 2026-03-10
00:56:26.939268 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-10 00:56:26.939273 | orchestrator | Tuesday 10 March 2026 00:52:24 +0000 (0:00:00.718) 0:03:22.661 ********* 2026-03-10 00:56:26.939277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:56:26.939282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:56:26.939287 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.939304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:56:26.939309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:56:26.939314 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.939318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:56:26.939323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:56:26.939327 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.939332 | orchestrator | 2026-03-10 00:56:26.939336 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-10 00:56:26.939341 | orchestrator | Tuesday 10 March 2026 00:52:25 +0000 (0:00:01.590) 0:03:24.251 ********* 2026-03-10 00:56:26.939345 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.939349 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.939354 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.939358 | orchestrator | 2026-03-10 00:56:26.939362 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-10 00:56:26.939367 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:01.367) 0:03:25.619 ********* 2026-03-10 00:56:26.939371 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.939376 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.939380 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.939384 | orchestrator | 2026-03-10 00:56:26.939389 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-10 00:56:26.939393 | orchestrator | Tuesday 10 March 2026 00:52:29 +0000 (0:00:02.417) 0:03:28.036 ********* 2026-03-10 00:56:26.939398 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.939402 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.939406 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.939411 | orchestrator | 2026-03-10 00:56:26.939415 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-10 00:56:26.939419 | orchestrator | Tuesday 10 March 2026 00:52:30 +0000 
(0:00:00.617) 0:03:28.654 ********* 2026-03-10 00:56:26.939428 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.939432 | orchestrator | 2026-03-10 00:56:26.939436 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-10 00:56:26.939441 | orchestrator | Tuesday 10 March 2026 00:52:31 +0000 (0:00:01.154) 0:03:29.808 ********* 2026-03-10 00:56:26.939451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 00:56:26.939457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 00:56:26.939466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939471 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 00:56:26.939484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939489 | orchestrator | 2026-03-10 00:56:26.939494 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-10 00:56:26.939498 | orchestrator | Tuesday 10 March 2026 00:52:35 +0000 (0:00:03.845) 0:03:33.654 ********* 2026-03-10 00:56:26.939503 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 00:56:26.939507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939512 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.939517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 00:56:26.939528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939533 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.939704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 00:56:26.939715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939719 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.939724 | orchestrator | 2026-03-10 00:56:26.939728 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-10 00:56:26.939733 | orchestrator | Tuesday 10 March 2026 00:52:36 +0000 (0:00:01.067) 0:03:34.722 ********* 2026-03-10 00:56:26.939737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-10 
00:56:26.939742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:56:26.939747 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.939751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:56:26.939755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:56:26.939764 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.939769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:56:26.939773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:56:26.939778 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.939782 | orchestrator | 2026-03-10 00:56:26.939786 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-10 00:56:26.939791 | orchestrator | Tuesday 10 March 2026 00:52:37 +0000 (0:00:00.985) 0:03:35.707 ********* 2026-03-10 00:56:26.939795 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.939800 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.939804 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.939808 | orchestrator | 2026-03-10 00:56:26.939813 | orchestrator | TASK [proxysql-config : 
Copying over magnum ProxySQL rules config] ************* 2026-03-10 00:56:26.939817 | orchestrator | Tuesday 10 March 2026 00:52:38 +0000 (0:00:01.416) 0:03:37.123 ********* 2026-03-10 00:56:26.939821 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.939826 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.939830 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.939834 | orchestrator | 2026-03-10 00:56:26.939839 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-10 00:56:26.939843 | orchestrator | Tuesday 10 March 2026 00:52:40 +0000 (0:00:02.296) 0:03:39.420 ********* 2026-03-10 00:56:26.939847 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.939852 | orchestrator | 2026-03-10 00:56:26.939857 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-10 00:56:26.939861 | orchestrator | Tuesday 10 March 2026 00:52:42 +0000 (0:00:01.582) 0:03:41.003 ********* 2026-03-10 00:56:26.939872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-10 00:56:26.939878 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939895 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-10 00:56:26.939902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-10 00:56:26.939926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939942 | orchestrator | 2026-03-10 00:56:26.939947 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-10 
00:56:26.939951 | orchestrator | Tuesday 10 March 2026 00:52:46 +0000 (0:00:03.862) 0:03:44.865 ********* 2026-03-10 00:56:26.939959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-10 00:56:26.939963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939980 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.939984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-10 00:56:26.939994 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.939999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940010 | 
orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-10 00:56:26.940020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940038 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940043 | orchestrator | 2026-03-10 00:56:26.940047 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-10 00:56:26.940052 | orchestrator | Tuesday 10 March 2026 00:52:47 +0000 (0:00:00.692) 0:03:45.558 ********* 2026-03-10 00:56:26.940056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:56:26.940060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:56:26.940068 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:56:26.940079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:56:26.940086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:56:26.940094 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:56:26.940108 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940115 | orchestrator | 2026-03-10 00:56:26.940122 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-10 00:56:26.940129 | orchestrator | Tuesday 10 March 2026 00:52:48 +0000 (0:00:01.502) 0:03:47.061 ********* 2026-03-10 00:56:26.940137 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.940145 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.940152 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.940159 | orchestrator | 2026-03-10 00:56:26.940168 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-10 00:56:26.940175 | orchestrator | Tuesday 10 March 2026 00:52:49 +0000 (0:00:01.357) 0:03:48.418 ********* 2026-03-10 00:56:26.940182 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.940189 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.940193 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.940197 | orchestrator | 2026-03-10 00:56:26.940202 | 
orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-10 00:56:26.940206 | orchestrator | Tuesday 10 March 2026 00:52:52 +0000 (0:00:02.451) 0:03:50.870 ********* 2026-03-10 00:56:26.940210 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.940215 | orchestrator | 2026-03-10 00:56:26.940219 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-10 00:56:26.940223 | orchestrator | Tuesday 10 March 2026 00:52:53 +0000 (0:00:01.506) 0:03:52.376 ********* 2026-03-10 00:56:26.940228 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-10 00:56:26.940232 | orchestrator | 2026-03-10 00:56:26.940237 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-10 00:56:26.940241 | orchestrator | Tuesday 10 March 2026 00:52:56 +0000 (0:00:02.984) 0:03:55.361 ********* 2026-03-10 00:56:26.940254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:56:26.940267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:56:26.940273 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:56:26.940288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:56:26.940297 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:56:26.940314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:56:26.940319 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940323 | orchestrator | 2026-03-10 00:56:26.940327 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-10 00:56:26.940332 | orchestrator | Tuesday 10 March 2026 00:52:58 +0000 (0:00:01.950) 0:03:57.312 ********* 2026-03-10 00:56:26.940342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:56:26.940352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:56:26.940357 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:56:26.940368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:56:26.940373 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:56:26.940394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:56:26.940399 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940404 | orchestrator | 2026-03-10 00:56:26.940409 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-10 00:56:26.940414 | orchestrator | Tuesday 10 March 2026 00:53:01 +0000 (0:00:02.533) 0:03:59.846 ********* 2026-03-10 00:56:26.940419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:56:26.940425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:56:26.940430 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:56:26.940449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:56:26.940455 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:56:26.940466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:56:26.940471 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940476 | orchestrator | 2026-03-10 00:56:26.940481 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-10 00:56:26.940486 | orchestrator | Tuesday 10 March 2026 00:53:04 +0000 (0:00:02.983) 0:04:02.830 ********* 2026-03-10 00:56:26.940491 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.940496 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.940501 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.940506 | orchestrator | 2026-03-10 00:56:26.940511 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-10 00:56:26.940516 | orchestrator | Tuesday 10 March 2026 00:53:06 +0000 (0:00:01.989) 0:04:04.819 ********* 
2026-03-10 00:56:26.940523 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940530 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940537 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940543 | orchestrator | 2026-03-10 00:56:26.940549 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-10 00:56:26.940557 | orchestrator | Tuesday 10 March 2026 00:53:08 +0000 (0:00:01.718) 0:04:06.537 ********* 2026-03-10 00:56:26.940565 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940572 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940579 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940587 | orchestrator | 2026-03-10 00:56:26.940592 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-10 00:56:26.940601 | orchestrator | Tuesday 10 March 2026 00:53:08 +0000 (0:00:00.364) 0:04:06.902 ********* 2026-03-10 00:56:26.940606 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.940611 | orchestrator | 2026-03-10 00:56:26.940616 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-10 00:56:26.940621 | orchestrator | Tuesday 10 March 2026 00:53:09 +0000 (0:00:01.540) 0:04:08.442 ********* 2026-03-10 00:56:26.940665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': 
False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-10 00:56:26.940680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-10 00:56:26.940686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-10 00:56:26.940691 | orchestrator | 2026-03-10 00:56:26.940696 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-10 
00:56:26.940701 | orchestrator | Tuesday 10 March 2026 00:53:11 +0000 (0:00:01.629) 0:04:10.072 ********* 2026-03-10 00:56:26.940706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-10 00:56:26.940712 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-10 00:56:26.940726 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-10 00:56:26.940735 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940740 | orchestrator | 2026-03-10 00:56:26.940744 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-10 00:56:26.940748 | orchestrator | Tuesday 10 March 2026 00:53:12 +0000 (0:00:00.509) 0:04:10.582 ********* 2026-03-10 00:56:26.940754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-10 00:56:26.940763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-10 00:56:26.940768 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940773 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-10 00:56:26.940782 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940786 | orchestrator | 2026-03-10 00:56:26.940790 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-10 00:56:26.940795 | orchestrator | Tuesday 10 March 2026 00:53:12 +0000 (0:00:00.920) 0:04:11.503 ********* 2026-03-10 00:56:26.940799 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940803 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940808 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940812 | orchestrator | 2026-03-10 00:56:26.940816 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-10 00:56:26.940821 | orchestrator | Tuesday 10 March 2026 00:53:13 +0000 (0:00:00.465) 0:04:11.968 ********* 2026-03-10 00:56:26.940825 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940829 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940833 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940838 | orchestrator | 2026-03-10 00:56:26.940842 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-10 00:56:26.940847 | orchestrator | Tuesday 10 March 2026 00:53:14 +0000 (0:00:01.431) 0:04:13.399 ********* 2026-03-10 00:56:26.940854 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.940859 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.940864 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.940868 | orchestrator | 2026-03-10 00:56:26.940872 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-10 00:56:26.940877 | orchestrator | Tuesday 10 March 2026 00:53:15 +0000 (0:00:00.327) 0:04:13.727 ********* 2026-03-10 00:56:26.940881 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.940885 | orchestrator | 2026-03-10 00:56:26.940889 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-10 00:56:26.940894 | orchestrator | Tuesday 10 March 2026 00:53:16 +0000 (0:00:01.637) 0:04:15.364 ********* 2026-03-10 00:56:26.940899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 00:56:26.940904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:56:26.940931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.940942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.940946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:56:26.940971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.940979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:56:26.940987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.940995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.941018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:56:26.941030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 00:56:26.941039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 00:56:26.941044 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2026-03-10 00:56:26.941274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:56:26.941278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:56:26.941311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:56:26.941360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:56:26.941375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.941433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:56:26.941450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:56:26.941455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:56:26.941460 | orchestrator | 2026-03-10 00:56:26.941465 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-10 00:56:26.941470 | orchestrator | Tuesday 10 March 2026 00:53:21 +0000 (0:00:04.637) 0:04:20.002 ********* 2026-03-10 00:56:26.941475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 00:56:26.941499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:56:26.941519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.941524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:56:26.941542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-10 00:56:26.941552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-10 00:56:26.941556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-10 00:56:26.941693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-10 00:56:26.941699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-10 00:56:26.941710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941715 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.941720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-10 00:56:26.941744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-10 00:56:26.941749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-10 00:56:26.941792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-10 00:56:26.941816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-10 00:56:26.941826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941830 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.941835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-10 00:56:26.941848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-10 00:56:26.941870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.941876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-10 00:56:26.941885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-10 00:56:26.941890 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.941895 | orchestrator |
2026-03-10 00:56:26.941900 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-10 00:56:26.941905 | orchestrator | Tuesday 10 March 2026 00:53:23 +0000 (0:00:01.755) 0:04:21.757 *********
2026-03-10 00:56:26.941911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-10 00:56:26.941917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-10 00:56:26.941922 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.941930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-10 00:56:26.941938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-10 00:56:26.941944 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.941949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-10 00:56:26.941954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-10 00:56:26.941960 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.941965 | orchestrator |
2026-03-10 00:56:26.941970 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-10 00:56:26.941976 | orchestrator | Tuesday 10 March 2026 00:53:25 +0000 (0:00:02.516) 0:04:24.273 *********
2026-03-10 00:56:26.941981 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.941986 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.941991 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.941997 | orchestrator |
2026-03-10 00:56:26.942002 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-10 00:56:26.942008 | orchestrator | Tuesday 10 March 2026 00:53:27 +0000 (0:00:01.418) 0:04:25.692 *********
2026-03-10 00:56:26.942040 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.942047 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.942052 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.942057 | orchestrator |
2026-03-10 00:56:26.942062 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-10 00:56:26.942071 | orchestrator | Tuesday 10 March 2026 00:53:29 +0000 (0:00:02.198) 0:04:27.890 *********
2026-03-10 00:56:26.942076 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.942082 | orchestrator |
2026-03-10 00:56:26.942088 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-10 00:56:26.942094 | orchestrator | Tuesday 10 March 2026 00:53:30 +0000 (0:00:01.270) 0:04:29.161 *********
2026-03-10 00:56:26.942099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.942106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.942119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.942124 | orchestrator |
2026-03-10 00:56:26.942129 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-10 00:56:26.942134 | orchestrator | Tuesday 10 March 2026 00:53:35 +0000 (0:00:04.600) 0:04:33.762 *********
2026-03-10 00:56:26.942138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.942146 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.942151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.942156 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.942161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.942166 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.942171 | orchestrator |
2026-03-10 00:56:26.942175 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-10 00:56:26.942180 | orchestrator | Tuesday 10 March 2026 00:53:35 +0000 (0:00:00.569) 0:04:34.331 *********
2026-03-10 00:56:26.942185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-10 00:56:26.942189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780',
'tls_backend': 'no'}})  2026-03-10 00:56:26.942196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942202 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.942209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942214 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.942219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942232 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.942236 | orchestrator | 2026-03-10 00:56:26.942241 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-10 00:56:26.942246 | orchestrator | Tuesday 10 March 2026 00:53:36 +0000 (0:00:00.880) 0:04:35.212 ********* 2026-03-10 00:56:26.942250 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.942255 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.942259 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.942264 | orchestrator | 2026-03-10 00:56:26.942269 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-10 00:56:26.942273 
| orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:02.194) 0:04:37.407 ********* 2026-03-10 00:56:26.942278 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.942283 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.942287 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.942292 | orchestrator | 2026-03-10 00:56:26.942296 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-10 00:56:26.942301 | orchestrator | Tuesday 10 March 2026 00:53:40 +0000 (0:00:01.978) 0:04:39.386 ********* 2026-03-10 00:56:26.942306 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.942310 | orchestrator | 2026-03-10 00:56:26.942315 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-10 00:56:26.942320 | orchestrator | Tuesday 10 March 2026 00:53:42 +0000 (0:00:01.755) 0:04:41.141 ********* 2026-03-10 00:56:26.942325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.942330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.942356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 00:56:26.942380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942390 | orchestrator | 2026-03-10 00:56:26.942394 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-10 00:56:26.942399 | orchestrator | Tuesday 10 March 2026 00:53:47 +0000 (0:00:05.086) 0:04:46.228 ********* 2026-03-10 00:56:26.942404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.942409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942422 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.942433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.942438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942448 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.942453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 00:56:26.942461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:56:26.942477 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.942482 | orchestrator | 2026-03-10 00:56:26.942486 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-10 00:56:26.942491 | orchestrator | Tuesday 10 March 2026 00:53:49 +0000 (0:00:01.464) 0:04:47.693 ********* 2026-03-10 00:56:26.942496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942515 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.942520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942539 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.942544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:56:26.942566 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.942571 | orchestrator | 2026-03-10 00:56:26.942575 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-10 00:56:26.942580 | orchestrator | Tuesday 10 March 2026 00:53:50 +0000 (0:00:01.001) 0:04:48.694 ********* 2026-03-10 00:56:26.942585 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.942589 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.942594 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.942598 | orchestrator | 2026-03-10 00:56:26.942603 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-10 00:56:26.942608 | orchestrator | Tuesday 10 March 2026 00:53:51 +0000 (0:00:01.646) 0:04:50.340 ********* 2026-03-10 00:56:26.942612 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.942617 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.942622 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.942643 | orchestrator | 2026-03-10 00:56:26.942655 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-10 00:56:26.942662 | orchestrator | Tuesday 10 March 2026 00:53:53 +0000 (0:00:02.016) 0:04:52.357 ********* 2026-03-10 00:56:26.942670 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-10 00:56:26.942675 | orchestrator | 2026-03-10 00:56:26.942679 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-10 00:56:26.942684 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:01.661) 0:04:54.018 ********* 2026-03-10 00:56:26.942689 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-10 00:56:26.942694 | orchestrator | 2026-03-10 00:56:26.942698 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-10 00:56:26.942703 | orchestrator | Tuesday 10 March 2026 00:53:56 +0000 (0:00:00.868) 0:04:54.886 ********* 2026-03-10 00:56:26.942708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-10 00:56:26.942713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-10 00:56:26.942718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-10 00:56:26.942727 | orchestrator | 2026-03-10 00:56:26.942731 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-10 00:56:26.942736 | orchestrator | Tuesday 10 March 2026 00:54:00 +0000 (0:00:04.177) 0:04:59.064 ********* 2026-03-10 00:56:26.942741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:56:26.942746 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.942751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:56:26.942756 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.942760 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:56:26.942765 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.942770 | orchestrator | 2026-03-10 00:56:26.942777 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-10 00:56:26.942782 | orchestrator | Tuesday 10 March 2026 00:54:02 +0000 (0:00:01.490) 0:05:00.555 ********* 2026-03-10 00:56:26.942790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:56:26.942796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:56:26.942801 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.942806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:56:26.942811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:56:26.942816 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.942820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:56:26.942825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:56:26.942833 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.942838 | orchestrator | 2026-03-10 00:56:26.942844 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-10 00:56:26.942851 | orchestrator | Tuesday 10 March 2026 00:54:03 +0000 (0:00:01.528) 0:05:02.083 ********* 2026-03-10 00:56:26.942858 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.942867 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.942878 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.942887 | orchestrator | 2026-03-10 00:56:26.942894 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-10 00:56:26.942901 | orchestrator | Tuesday 10 March 2026 00:54:06 +0000 (0:00:02.603) 0:05:04.687 ********* 2026-03-10 00:56:26.942908 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.942915 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.942922 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.942929 | orchestrator | 2026-03-10 00:56:26.942936 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-10 00:56:26.942943 | orchestrator | Tuesday 10 March 2026 00:54:09 
+0000 (0:00:03.529) 0:05:08.217 *********
2026-03-10 00:56:26.942951 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-10 00:56:26.942959 | orchestrator |
2026-03-10 00:56:26.942966 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-10 00:56:26.942973 | orchestrator | Tuesday 10 March 2026 00:54:11 +0000 (0:00:01.630) 0:05:09.847 *********
2026-03-10 00:56:26.942981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-10 00:56:26.942989 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.942996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-10 00:56:26.943004 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-10 00:56:26.943027 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943032 | orchestrator |
2026-03-10 00:56:26.943036 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-10 00:56:26.943041 | orchestrator | Tuesday 10 March 2026 00:54:12 +0000 (0:00:01.547) 0:05:11.395 *********
2026-03-10 00:56:26.943046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-10 00:56:26.943056 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-10 00:56:26.943065 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-10 00:56:26.943075 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943080 | orchestrator |
2026-03-10 00:56:26.943084 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-10 00:56:26.943089 | orchestrator | Tuesday 10 March 2026 00:54:14 +0000 (0:00:01.487) 0:05:12.882 *********
2026-03-10 00:56:26.943094 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943098 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943103 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943108 | orchestrator |
2026-03-10 00:56:26.943112 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-10 00:56:26.943117 | orchestrator | Tuesday 10 March 2026 00:54:16 +0000 (0:00:02.177) 0:05:15.060 *********
2026-03-10 00:56:26.943122 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.943126 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.943131 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.943136 | orchestrator |
2026-03-10 00:56:26.943140 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-10 00:56:26.943145 | orchestrator | Tuesday 10 March 2026 00:54:19 +0000 (0:00:02.571) 0:05:17.631 *********
2026-03-10 00:56:26.943150 |
orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.943154 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.943159 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.943167 | orchestrator |
2026-03-10 00:56:26.943174 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-10 00:56:26.943181 | orchestrator | Tuesday 10 March 2026 00:54:22 +0000 (0:00:03.295) 0:05:20.927 *********
2026-03-10 00:56:26.943187 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-10 00:56:26.943194 | orchestrator |
2026-03-10 00:56:26.943200 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-10 00:56:26.943207 | orchestrator | Tuesday 10 March 2026 00:54:23 +0000 (0:00:00.936) 0:05:21.864 *********
2026-03-10 00:56:26.943219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-10 00:56:26.943233 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-10 00:56:26.943254 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-10 00:56:26.943270 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943277 | orchestrator |
2026-03-10 00:56:26.943284 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-10 00:56:26.943291 | orchestrator | Tuesday 10 March 2026 00:54:24 +0000 (0:00:01.514) 0:05:23.378 *********
2026-03-10 00:56:26.943299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-10 00:56:26.943305 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-10 00:56:26.943319 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-10 00:56:26.943333 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943339 | orchestrator |
2026-03-10 00:56:26.943346 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-10 00:56:26.943352 | orchestrator | Tuesday 10 March 2026 00:54:26 +0000 (0:00:01.591) 0:05:24.969 *********
2026-03-10 00:56:26.943359 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943373 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943380 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943386 | orchestrator |
2026-03-10 00:56:26.943392 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-10 00:56:26.943400 | orchestrator | Tuesday 10 March 2026 00:54:28 +0000 (0:00:01.994) 0:05:26.964 *********
2026-03-10 00:56:26.943407 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.943414 | orchestrator | ok:
[testbed-node-2]
2026-03-10 00:56:26.943422 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.943430 | orchestrator |
2026-03-10 00:56:26.943437 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-10 00:56:26.943444 | orchestrator | Tuesday 10 March 2026 00:54:31 +0000 (0:00:02.755) 0:05:29.720 *********
2026-03-10 00:56:26.943451 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.943458 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.943464 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.943471 | orchestrator |
2026-03-10 00:56:26.943478 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-10 00:56:26.943486 | orchestrator | Tuesday 10 March 2026 00:54:35 +0000 (0:00:03.944) 0:05:33.665 *********
2026-03-10 00:56:26.943499 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:56:26.943506 | orchestrator |
2026-03-10 00:56:26.943513 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-10 00:56:26.943524 | orchestrator | Tuesday 10 March 2026 00:54:37 +0000 (0:00:01.934) 0:05:35.600 *********
2026-03-10 00:56:26.943533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.943542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.943549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:56:26.943563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:56:26.943571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.943618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.943683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.943698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:56:26.943711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.943735 | orchestrator |
2026-03-10 00:56:26.943743 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-10
00:56:26.943751 | orchestrator | Tuesday 10 March 2026 00:54:41 +0000 (0:00:04.054) 0:05:39.654 *********
2026-03-10 00:56:26.943765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.943773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:56:26.943790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.943797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:56:26.943814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.943840 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.943854 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.943859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.943864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:56:26.943873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:56:26.943885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:56:26.943890 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.943895 | orchestrator |
2026-03-10 00:56:26.943899 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-10 00:56:26.943907 | orchestrator | Tuesday 10 March 2026 00:54:41 +0000 (0:00:00.790) 0:05:40.445 *********
2026-03-10 00:56:26.943912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:56:26.943917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:56:26.943922 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.943927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:56:26.943931
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-10 00:56:26.943936 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.943941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-10 00:56:26.943949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-10 00:56:26.943954 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.943958 | orchestrator | 2026-03-10 00:56:26.943963 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-10 00:56:26.943967 | orchestrator | Tuesday 10 March 2026 00:54:43 +0000 (0:00:01.703) 0:05:42.149 ********* 2026-03-10 00:56:26.943972 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.943976 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.943981 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.943986 | orchestrator | 2026-03-10 00:56:26.943993 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-10 00:56:26.944001 | orchestrator | Tuesday 10 March 2026 00:54:45 +0000 (0:00:01.434) 0:05:43.584 ********* 2026-03-10 00:56:26.944008 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:56:26.944016 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:56:26.944023 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:56:26.944030 | orchestrator | 2026-03-10 00:56:26.944037 | orchestrator | TASK 
[include_role : opensearch] *********************************************** 2026-03-10 00:56:26.944044 | orchestrator | Tuesday 10 March 2026 00:54:47 +0000 (0:00:02.327) 0:05:45.911 ********* 2026-03-10 00:56:26.944050 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.944058 | orchestrator | 2026-03-10 00:56:26.944065 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-10 00:56:26.944072 | orchestrator | Tuesday 10 March 2026 00:54:48 +0000 (0:00:01.422) 0:05:47.334 ********* 2026-03-10 00:56:26.944081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 00:56:26.944097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 00:56:26.944105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 00:56:26.944119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 00:56:26.944129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 00:56:26.944143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 00:56:26.944151 | orchestrator | 2026-03-10 00:56:26.944163 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-10 00:56:26.944171 | orchestrator | Tuesday 10 March 2026 00:54:54 +0000 (0:00:05.976) 0:05:53.310 ********* 2026-03-10 00:56:26.944179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:56:26.944196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:56:26.944204 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.944210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:56:26.944218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:56:26.944224 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.944232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:56:26.944240 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:56:26.944246 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.944250 | orchestrator | 2026-03-10 00:56:26.944255 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-10 00:56:26.944259 | orchestrator | Tuesday 10 March 2026 00:54:55 +0000 (0:00:00.734) 0:05:54.045 ********* 2026-03-10 00:56:26.944264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-10 00:56:26.944269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-10 00:56:26.944274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-10 00:56:26.944279 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.944284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-10 00:56:26.944288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-10 00:56:26.944293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-10 00:56:26.944298 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.944303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-10 00:56:26.944314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-10 00:56:26.944321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-10 00:56:26.944326 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 00:56:26.944331 | orchestrator | 2026-03-10 00:56:26.944336 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-10 00:56:26.944340 | orchestrator | Tuesday 10 March 2026 00:54:56 +0000 (0:00:01.094) 0:05:55.140 ********* 2026-03-10 00:56:26.944345 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.944349 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.944354 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.944358 | orchestrator | 2026-03-10 00:56:26.944363 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-10 00:56:26.944368 | orchestrator | Tuesday 10 March 2026 00:54:57 +0000 (0:00:01.058) 0:05:56.198 ********* 2026-03-10 00:56:26.944372 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.944377 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.944381 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.944386 | orchestrator | 2026-03-10 00:56:26.944390 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-10 00:56:26.944395 | orchestrator | Tuesday 10 March 2026 00:54:59 +0000 (0:00:01.499) 0:05:57.698 ********* 2026-03-10 00:56:26.944399 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:56:26.944404 | orchestrator | 2026-03-10 00:56:26.944409 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-10 00:56:26.944413 | orchestrator | Tuesday 10 March 2026 00:55:00 +0000 (0:00:01.563) 0:05:59.261 ********* 2026-03-10 00:56:26.944418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 00:56:26.944423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:56:26.944428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 00:56:26.944458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:56:26.944464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 00:56:26.944469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:56:26.944495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 00:56:26.944528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:56:26.944536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 00:56:26.944604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:56:26.944645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 00:56:26.944671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:56:26.944681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944699 | orchestrator | 2026-03-10 00:56:26.944707 | orchestrator | TASK 
[haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-10 00:56:26.944712 | orchestrator | Tuesday 10 March 2026 00:55:05 +0000 (0:00:05.001) 0:06:04.263 ********* 2026-03-10 00:56:26.944720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 00:56:26.944725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:56:26.944730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 00:56:26.944761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 00:56:26.944766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:56:26.944771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:56:26.944780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944815 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.944820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 00:56:26.944830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:56:26.944835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944855 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.944860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 00:56:26.944865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:56:26.944873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 00:56:26.944900 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:56:26.944905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:56:26.944918 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:56:26.944922 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.944927 | orchestrator | 2026-03-10 00:56:26.944932 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-10 00:56:26.944937 | orchestrator | Tuesday 10 March 2026 00:55:07 +0000 (0:00:01.637) 0:06:05.900 ********* 2026-03-10 00:56:26.944941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-10 00:56:26.944946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-10 00:56:26.944951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-10 00:56:26.944957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-10 00:56:26.944964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:56:26.944974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:56:26.944979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:56:26.944984 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.944989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:56:26.944994 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.945002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-10 00:56:26.945006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-10 00:56:26.945011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:56:26.945016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:56:26.945021 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.945025 | orchestrator | 2026-03-10 00:56:26.945030 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-10 00:56:26.945035 | orchestrator | Tuesday 10 March 2026 00:55:08 +0000 (0:00:01.264) 0:06:07.165 ********* 2026-03-10 00:56:26.945039 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.945044 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.945048 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.945053 | orchestrator | 2026-03-10 00:56:26.945057 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-10 00:56:26.945062 | orchestrator | Tuesday 10 March 2026 00:55:09 +0000 (0:00:00.540) 0:06:07.706 ********* 2026-03-10 00:56:26.945066 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:56:26.945071 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:56:26.945075 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:56:26.945080 | orchestrator | 2026-03-10 00:56:26.945085 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-10 00:56:26.945089 | orchestrator | Tuesday 10 March 2026 00:55:11 +0000 (0:00:01.835) 0:06:09.541 ********* 2026-03-10 00:56:26.945094 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, 
testbed-node-2
2026-03-10 00:56:26.945099 | orchestrator |
2026-03-10 00:56:26.945103 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-10 00:56:26.945108 | orchestrator | Tuesday 10 March 2026 00:55:12 +0000 (0:00:01.981) 0:06:11.523 *********
2026-03-10 00:56:26.945116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:56:26.945124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:56:26.945133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:56:26.945138 | orchestrator |
2026-03-10 00:56:26.945143 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-10 00:56:26.945147 | orchestrator | Tuesday 10 March 2026 00:55:15 +0000 (0:00:02.868) 0:06:14.392 *********
2026-03-10 00:56:26.945152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:56:26.945157 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:56:26.945173 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:56:26.945186 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945191 | orchestrator |
2026-03-10 00:56:26.945195 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-10 00:56:26.945200 | orchestrator | Tuesday 10 March 2026 00:55:16 +0000 (0:00:00.441) 0:06:14.833 *********
2026-03-10 00:56:26.945204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-10 00:56:26.945209 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-10 00:56:26.945218 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672',
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945368 | orchestrator |
2026-03-10 00:56:26.945373 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-03-10 00:56:26.945377 | orchestrator | Tuesday 10 March 2026 00:55:28 +0000 (0:00:07.075) 0:06:26.908 *********
2026-03-10 00:56:26.945382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945392 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945415 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999',
'tls_backend': 'no'}}}})
2026-03-10 00:56:26.945429 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945434 | orchestrator |
2026-03-10 00:56:26.945438 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-03-10 00:56:26.945443 | orchestrator | Tuesday 10 March 2026 00:55:29 +0000 (0:00:00.717) 0:06:27.626 *********
2026-03-10 00:56:26.945447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945470 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945496 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:56:26.945522 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945526 | orchestrator |
2026-03-10 00:56:26.945531 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-10 00:56:26.945535 | orchestrator | Tuesday 10 March 2026 00:55:30 +0000 (0:00:01.876) 0:06:29.502
*********
2026-03-10 00:56:26.945540 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.945545 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.945549 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.945557 | orchestrator |
2026-03-10 00:56:26.945564 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-10 00:56:26.945571 | orchestrator | Tuesday 10 March 2026 00:55:32 +0000 (0:00:01.401) 0:06:30.904 *********
2026-03-10 00:56:26.945578 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.945585 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.945593 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.945601 | orchestrator |
2026-03-10 00:56:26.945609 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-10 00:56:26.945616 | orchestrator | Tuesday 10 March 2026 00:55:34 +0000 (0:00:02.329) 0:06:33.233 *********
2026-03-10 00:56:26.945625 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945651 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945658 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945663 | orchestrator |
2026-03-10 00:56:26.945668 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-10 00:56:26.945672 | orchestrator | Tuesday 10 March 2026 00:55:35 +0000 (0:00:00.362) 0:06:33.596 *********
2026-03-10 00:56:26.945677 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945681 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945686 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945691 | orchestrator |
2026-03-10 00:56:26.945695 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-10 00:56:26.945700 | orchestrator | Tuesday 10 March 2026 00:55:35 +0000 (0:00:00.397) 0:06:33.994
2026-03-10 00:56:26.945704 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945709 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945713 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945718 | orchestrator |
2026-03-10 00:56:26.945723 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-10 00:56:26.945727 | orchestrator | Tuesday 10 March 2026 00:55:36 +0000 (0:00:00.806) 0:06:34.801 *********
2026-03-10 00:56:26.945732 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945737 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945741 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945746 | orchestrator |
2026-03-10 00:56:26.945751 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-10 00:56:26.945755 | orchestrator | Tuesday 10 March 2026 00:55:36 +0000 (0:00:00.360) 0:06:35.161 *********
2026-03-10 00:56:26.945760 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945764 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945769 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945773 | orchestrator |
2026-03-10 00:56:26.945778 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-10 00:56:26.945782 | orchestrator | Tuesday 10 March 2026 00:55:36 +0000 (0:00:00.340) 0:06:35.502 *********
2026-03-10 00:56:26.945787 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.945791 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.945796 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.945800 | orchestrator |
2026-03-10 00:56:26.945805 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-10 00:56:26.945809 | orchestrator | Tuesday 10 March 2026 00:55:37 +0000 (0:00:00.946) 0:06:36.449 *********
2026-03-10 00:56:26.945814 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.945819 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.945823 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.945828 | orchestrator |
2026-03-10 00:56:26.945832 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-10 00:56:26.945837 | orchestrator | Tuesday 10 March 2026 00:55:38 +0000 (0:00:00.723) 0:06:37.172 *********
2026-03-10 00:56:26.945842 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.945846 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.945851 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.945855 | orchestrator |
2026-03-10 00:56:26.945860 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-10 00:56:26.945864 | orchestrator | Tuesday 10 March 2026 00:55:39 +0000 (0:00:00.359) 0:06:37.532 *********
2026-03-10 00:56:26.945869 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.945874 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.945878 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.945883 | orchestrator |
2026-03-10 00:56:26.945891 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-10 00:56:26.945895 | orchestrator | Tuesday 10 March 2026 00:55:39 +0000 (0:00:00.943) 0:06:38.476 *********
2026-03-10 00:56:26.945900 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.945904 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.945917 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.945922 | orchestrator |
2026-03-10 00:56:26.945926 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-10 00:56:26.945931 | orchestrator | Tuesday 10 March 2026 00:55:41 +0000 (0:00:01.317) 0:06:39.793 *********
2026-03-10 00:56:26.945936 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.945940 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.945945 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.945949 | orchestrator |
2026-03-10 00:56:26.945954 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-10 00:56:26.945958 | orchestrator | Tuesday 10 March 2026 00:55:42 +0000 (0:00:01.104) 0:06:40.898 *********
2026-03-10 00:56:26.945963 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.945968 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.945972 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.945977 | orchestrator |
2026-03-10 00:56:26.945981 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-10 00:56:26.945986 | orchestrator | Tuesday 10 March 2026 00:55:52 +0000 (0:00:10.115) 0:06:51.013 *********
2026-03-10 00:56:26.945991 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.945995 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.946000 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.946004 | orchestrator |
2026-03-10 00:56:26.946009 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-10 00:56:26.946033 | orchestrator | Tuesday 10 March 2026 00:55:53 +0000 (0:00:00.804) 0:06:51.818 *********
2026-03-10 00:56:26.946038 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.946042 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.946047 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.946051 | orchestrator |
2026-03-10 00:56:26.946056 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-10 00:56:26.946060 | orchestrator | Tuesday 10 March 2026 00:56:08 +0000 (0:00:15.701) 0:07:07.519 *********
2026-03-10 00:56:26.946065 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.946072 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:56:26.946076 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:56:26.946081 | orchestrator |
2026-03-10 00:56:26.946085 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-10 00:56:26.946090 | orchestrator | Tuesday 10 March 2026 00:56:10 +0000 (0:00:01.507) 0:07:09.026 *********
2026-03-10 00:56:26.946095 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:56:26.946099 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:56:26.946104 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:56:26.946108 | orchestrator |
2026-03-10 00:56:26.946113 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-10 00:56:26.946118 | orchestrator | Tuesday 10 March 2026 00:56:20 +0000 (0:00:09.699) 0:07:18.725 *********
2026-03-10 00:56:26.946122 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.946127 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.946131 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.946136 | orchestrator |
2026-03-10 00:56:26.946140 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-10 00:56:26.946145 | orchestrator | Tuesday 10 March 2026 00:56:20 +0000 (0:00:00.391) 0:07:19.117 *********
2026-03-10 00:56:26.946149 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.946154 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.946159 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.946163 | orchestrator |
2026-03-10 00:56:26.946168 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-10 00:56:26.946172 | orchestrator | Tuesday 10 March 2026 00:56:20 +0000 (0:00:00.366) 0:07:19.483 *********
2026-03-10 00:56:26.946177 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.946181 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.946186 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.946190 | orchestrator |
2026-03-10 00:56:26.946198 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-10 00:56:26.946203 | orchestrator | Tuesday 10 March 2026 00:56:21 +0000 (0:00:00.778) 0:07:20.262 *********
2026-03-10 00:56:26.946208 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.946212 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.946217 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.946221 | orchestrator |
2026-03-10 00:56:26.946226 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-10 00:56:26.946230 | orchestrator | Tuesday 10 March 2026 00:56:22 +0000 (0:00:00.475) 0:07:20.738 *********
2026-03-10 00:56:26.946235 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.946239 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.946244 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.946248 | orchestrator |
2026-03-10 00:56:26.946253 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-10 00:56:26.946257 | orchestrator | Tuesday 10 March 2026 00:56:22 +0000 (0:00:00.359) 0:07:21.097 *********
2026-03-10 00:56:26.946262 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:56:26.946266 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:56:26.946271 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:56:26.946275 | orchestrator |
2026-03-10 00:56:26.946280 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-10 00:56:26.946284 | orchestrator | Tuesday 10 March 2026 00:56:22 +0000 (0:00:00.363) 0:07:21.460 *********
2026-03-10 00:56:26.946289 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:56:26.946294 | orchestrator | ok:
[testbed-node-1] 2026-03-10 00:56:26.946298 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:56:26.946303 | orchestrator | 2026-03-10 00:56:26.946307 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-10 00:56:26.946312 | orchestrator | Tuesday 10 March 2026 00:56:24 +0000 (0:00:01.461) 0:07:22.922 ********* 2026-03-10 00:56:26.946316 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:56:26.946321 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:56:26.946325 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:56:26.946330 | orchestrator | 2026-03-10 00:56:26.946334 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:56:26.946343 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-10 00:56:26.946350 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-10 00:56:26.946355 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-10 00:56:26.946360 | orchestrator | 2026-03-10 00:56:26.946364 | orchestrator | 2026-03-10 00:56:26.946369 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:56:26.946373 | orchestrator | Tuesday 10 March 2026 00:56:25 +0000 (0:00:00.851) 0:07:23.773 ********* 2026-03-10 00:56:26.946378 | orchestrator | =============================================================================== 2026-03-10 00:56:26.946383 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.70s 2026-03-10 00:56:26.946387 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.12s 2026-03-10 00:56:26.946392 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.70s 2026-03-10 00:56:26.946396 | 
orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.08s 2026-03-10 00:56:26.946401 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.53s 2026-03-10 00:56:26.946405 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.39s 2026-03-10 00:56:26.946410 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.32s 2026-03-10 00:56:26.946414 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.11s 2026-03-10 00:56:26.946423 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.98s 2026-03-10 00:56:26.946428 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 5.42s 2026-03-10 00:56:26.946432 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.34s 2026-03-10 00:56:26.946437 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.09s 2026-03-10 00:56:26.946441 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.00s 2026-03-10 00:56:26.946446 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.98s 2026-03-10 00:56:26.946450 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.72s 2026-03-10 00:56:26.946455 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.70s 2026-03-10 00:56:26.946460 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.64s 2026-03-10 00:56:26.946464 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.60s 2026-03-10 00:56:26.946469 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.47s 2026-03-10 00:56:26.946473 | orchestrator | 
haproxy-config : Copying over barbican haproxy config ------------------- 4.34s 2026-03-10 00:56:29.997941 | orchestrator | 2026-03-10 00:56:29 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:29.998895 | orchestrator | 2026-03-10 00:56:29 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:30.001210 | orchestrator | 2026-03-10 00:56:30 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:30.001292 | orchestrator | 2026-03-10 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:33.046940 | orchestrator | 2026-03-10 00:56:33 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:33.055447 | orchestrator | 2026-03-10 00:56:33 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:33.056459 | orchestrator | 2026-03-10 00:56:33 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:33.056492 | orchestrator | 2026-03-10 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:36.103700 | orchestrator | 2026-03-10 00:56:36 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:36.106614 | orchestrator | 2026-03-10 00:56:36 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:36.109165 | orchestrator | 2026-03-10 00:56:36 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:36.109515 | orchestrator | 2026-03-10 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:39.162277 | orchestrator | 2026-03-10 00:56:39 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:39.164819 | orchestrator | 2026-03-10 00:56:39 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:39.167488 | orchestrator | 2026-03-10 00:56:39 | INFO  | Task 
15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:39.167840 | orchestrator | 2026-03-10 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:42.205801 | orchestrator | 2026-03-10 00:56:42 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:42.206847 | orchestrator | 2026-03-10 00:56:42 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:42.207712 | orchestrator | 2026-03-10 00:56:42 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:42.207902 | orchestrator | 2026-03-10 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:45.241365 | orchestrator | 2026-03-10 00:56:45 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:45.243201 | orchestrator | 2026-03-10 00:56:45 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:45.244997 | orchestrator | 2026-03-10 00:56:45 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:45.245390 | orchestrator | 2026-03-10 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:48.293540 | orchestrator | 2026-03-10 00:56:48 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:48.294147 | orchestrator | 2026-03-10 00:56:48 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:48.299258 | orchestrator | 2026-03-10 00:56:48 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:48.299330 | orchestrator | 2026-03-10 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:51.343878 | orchestrator | 2026-03-10 00:56:51 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:51.346744 | orchestrator | 2026-03-10 00:56:51 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state 
STARTED 2026-03-10 00:56:51.348205 | orchestrator | 2026-03-10 00:56:51 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:51.349537 | orchestrator | 2026-03-10 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:54.387964 | orchestrator | 2026-03-10 00:56:54 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:54.388073 | orchestrator | 2026-03-10 00:56:54 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:54.388905 | orchestrator | 2026-03-10 00:56:54 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:54.388940 | orchestrator | 2026-03-10 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:56:57.436149 | orchestrator | 2026-03-10 00:56:57 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:56:57.438258 | orchestrator | 2026-03-10 00:56:57 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:56:57.440101 | orchestrator | 2026-03-10 00:56:57 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:56:57.440154 | orchestrator | 2026-03-10 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:00.499268 | orchestrator | 2026-03-10 00:57:00 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:00.501393 | orchestrator | 2026-03-10 00:57:00 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:00.504516 | orchestrator | 2026-03-10 00:57:00 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:00.504566 | orchestrator | 2026-03-10 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:03.548309 | orchestrator | 2026-03-10 00:57:03 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:03.549054 | orchestrator | 
2026-03-10 00:57:03 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:03.551124 | orchestrator | 2026-03-10 00:57:03 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:03.551190 | orchestrator | 2026-03-10 00:57:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:06.592084 | orchestrator | 2026-03-10 00:57:06 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:06.592710 | orchestrator | 2026-03-10 00:57:06 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:06.594871 | orchestrator | 2026-03-10 00:57:06 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:06.594936 | orchestrator | 2026-03-10 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:09.626282 | orchestrator | 2026-03-10 00:57:09 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:09.627729 | orchestrator | 2026-03-10 00:57:09 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:09.630136 | orchestrator | 2026-03-10 00:57:09 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:09.630503 | orchestrator | 2026-03-10 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:12.662658 | orchestrator | 2026-03-10 00:57:12 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:12.663141 | orchestrator | 2026-03-10 00:57:12 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:12.664944 | orchestrator | 2026-03-10 00:57:12 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:12.664978 | orchestrator | 2026-03-10 00:57:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:15.712547 | orchestrator | 2026-03-10 00:57:15 | INFO  | Task 
5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:15.713571 | orchestrator | 2026-03-10 00:57:15 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:15.716332 | orchestrator | 2026-03-10 00:57:15 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:15.716393 | orchestrator | 2026-03-10 00:57:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:18.772757 | orchestrator | 2026-03-10 00:57:18 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:18.775033 | orchestrator | 2026-03-10 00:57:18 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:18.776799 | orchestrator | 2026-03-10 00:57:18 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:18.776849 | orchestrator | 2026-03-10 00:57:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:21.818106 | orchestrator | 2026-03-10 00:57:21 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:21.818268 | orchestrator | 2026-03-10 00:57:21 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:21.819268 | orchestrator | 2026-03-10 00:57:21 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:21.819289 | orchestrator | 2026-03-10 00:57:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:24.862164 | orchestrator | 2026-03-10 00:57:24 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:24.864839 | orchestrator | 2026-03-10 00:57:24 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:24.868107 | orchestrator | 2026-03-10 00:57:24 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:24.868253 | orchestrator | 2026-03-10 00:57:24 | INFO  | Wait 1 second(s) until the next 
check 2026-03-10 00:57:27.916989 | orchestrator | 2026-03-10 00:57:27 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:27.917165 | orchestrator | 2026-03-10 00:57:27 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:27.917878 | orchestrator | 2026-03-10 00:57:27 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:27.917974 | orchestrator | 2026-03-10 00:57:27 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:30.962229 | orchestrator | 2026-03-10 00:57:30 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:30.964718 | orchestrator | 2026-03-10 00:57:30 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:30.966187 | orchestrator | 2026-03-10 00:57:30 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:30.966259 | orchestrator | 2026-03-10 00:57:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:34.024005 | orchestrator | 2026-03-10 00:57:34 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:34.026324 | orchestrator | 2026-03-10 00:57:34 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:34.028426 | orchestrator | 2026-03-10 00:57:34 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:34.028531 | orchestrator | 2026-03-10 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:37.073783 | orchestrator | 2026-03-10 00:57:37 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:37.075411 | orchestrator | 2026-03-10 00:57:37 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:37.078301 | orchestrator | 2026-03-10 00:57:37 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 
00:57:37.078396 | orchestrator | 2026-03-10 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:40.123879 | orchestrator | 2026-03-10 00:57:40 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:40.127066 | orchestrator | 2026-03-10 00:57:40 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:40.130282 | orchestrator | 2026-03-10 00:57:40 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:40.130407 | orchestrator | 2026-03-10 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:43.176159 | orchestrator | 2026-03-10 00:57:43 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:43.177486 | orchestrator | 2026-03-10 00:57:43 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:43.178959 | orchestrator | 2026-03-10 00:57:43 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:43.179008 | orchestrator | 2026-03-10 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:46.233761 | orchestrator | 2026-03-10 00:57:46 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:46.234897 | orchestrator | 2026-03-10 00:57:46 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:46.236670 | orchestrator | 2026-03-10 00:57:46 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:46.236706 | orchestrator | 2026-03-10 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:49.279462 | orchestrator | 2026-03-10 00:57:49 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:49.281744 | orchestrator | 2026-03-10 00:57:49 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:49.284576 | orchestrator | 2026-03-10 00:57:49 | 
INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:49.284640 | orchestrator | 2026-03-10 00:57:49 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:52.333195 | orchestrator | 2026-03-10 00:57:52 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:52.335658 | orchestrator | 2026-03-10 00:57:52 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:52.337303 | orchestrator | 2026-03-10 00:57:52 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:52.337370 | orchestrator | 2026-03-10 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:55.383205 | orchestrator | 2026-03-10 00:57:55 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:55.383937 | orchestrator | 2026-03-10 00:57:55 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:55.385475 | orchestrator | 2026-03-10 00:57:55 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:55.385520 | orchestrator | 2026-03-10 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:57:58.441154 | orchestrator | 2026-03-10 00:57:58 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:57:58.442378 | orchestrator | 2026-03-10 00:57:58 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:57:58.444067 | orchestrator | 2026-03-10 00:57:58 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:57:58.444115 | orchestrator | 2026-03-10 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:01.511387 | orchestrator | 2026-03-10 00:58:01 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:01.514153 | orchestrator | 2026-03-10 00:58:01 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in 
state STARTED 2026-03-10 00:58:01.516809 | orchestrator | 2026-03-10 00:58:01 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:01.516873 | orchestrator | 2026-03-10 00:58:01 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:04.557156 | orchestrator | 2026-03-10 00:58:04 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:04.557854 | orchestrator | 2026-03-10 00:58:04 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:58:04.559101 | orchestrator | 2026-03-10 00:58:04 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:04.559218 | orchestrator | 2026-03-10 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:07.602600 | orchestrator | 2026-03-10 00:58:07 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:07.604744 | orchestrator | 2026-03-10 00:58:07 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:58:07.606788 | orchestrator | 2026-03-10 00:58:07 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:07.607502 | orchestrator | 2026-03-10 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:10.662741 | orchestrator | 2026-03-10 00:58:10 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:10.664295 | orchestrator | 2026-03-10 00:58:10 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:58:10.666737 | orchestrator | 2026-03-10 00:58:10 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:10.667267 | orchestrator | 2026-03-10 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:13.707113 | orchestrator | 2026-03-10 00:58:13 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:13.707947 | orchestrator 
| 2026-03-10 00:58:13 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:58:13.708950 | orchestrator | 2026-03-10 00:58:13 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:13.708983 | orchestrator | 2026-03-10 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:16.746283 | orchestrator | 2026-03-10 00:58:16 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:16.747969 | orchestrator | 2026-03-10 00:58:16 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:58:16.749353 | orchestrator | 2026-03-10 00:58:16 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:16.749442 | orchestrator | 2026-03-10 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:19.792505 | orchestrator | 2026-03-10 00:58:19 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:19.793349 | orchestrator | 2026-03-10 00:58:19 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state STARTED 2026-03-10 00:58:19.794384 | orchestrator | 2026-03-10 00:58:19 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED 2026-03-10 00:58:19.794410 | orchestrator | 2026-03-10 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:58:22.831476 | orchestrator | 2026-03-10 00:58:22 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED 2026-03-10 00:58:22.833397 | orchestrator | 2026-03-10 00:58:22 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED 2026-03-10 00:58:22.838404 | orchestrator | 2026-03-10 00:58:22 | INFO  | Task 55ef4970-9c46-45ba-9d9b-eaa23d0ab170 is in state SUCCESS 2026-03-10 00:58:22.838676 | orchestrator | 2026-03-10 00:58:22.841351 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-10 00:58:22.842245 | orchestrator | 
2.16.14 2026-03-10 00:58:22.842291 | orchestrator | 2026-03-10 00:58:22.842303 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-10 00:58:22.842315 | orchestrator | 2026-03-10 00:58:22.842326 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-10 00:58:22.842337 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:00.660) 0:00:00.660 ********* 2026-03-10 00:58:22.842349 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.842359 | orchestrator | 2026-03-10 00:58:22.842369 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-10 00:58:22.842379 | orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:01.063) 0:00:01.723 ********* 2026-03-10 00:58:22.842391 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842401 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842411 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842421 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.842432 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.842444 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.842454 | orchestrator | 2026-03-10 00:58:22.842465 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-10 00:58:22.842476 | orchestrator | Tuesday 10 March 2026 00:46:31 +0000 (0:00:01.581) 0:00:03.305 ********* 2026-03-10 00:58:22.842541 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842554 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842564 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842590 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.842597 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.842603 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 00:58:22.842609 | orchestrator | 2026-03-10 00:58:22.842615 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-10 00:58:22.842622 | orchestrator | Tuesday 10 March 2026 00:46:32 +0000 (0:00:00.842) 0:00:04.147 ********* 2026-03-10 00:58:22.842628 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842634 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842640 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842646 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.842652 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.842658 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.842664 | orchestrator | 2026-03-10 00:58:22.842670 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-10 00:58:22.842676 | orchestrator | Tuesday 10 March 2026 00:46:33 +0000 (0:00:01.061) 0:00:05.209 ********* 2026-03-10 00:58:22.842682 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842688 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842694 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842700 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.842706 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.842712 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.842718 | orchestrator | 2026-03-10 00:58:22.842724 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-10 00:58:22.842731 | orchestrator | Tuesday 10 March 2026 00:46:34 +0000 (0:00:00.680) 0:00:05.889 ********* 2026-03-10 00:58:22.842737 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842743 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842749 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842755 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.842761 | orchestrator | ok: [testbed-node-1] 2026-03-10 
00:58:22.842768 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.842774 | orchestrator | 2026-03-10 00:58:22.842780 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-10 00:58:22.842786 | orchestrator | Tuesday 10 March 2026 00:46:34 +0000 (0:00:00.568) 0:00:06.458 ********* 2026-03-10 00:58:22.842792 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842798 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842804 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842810 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.842816 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.842822 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.842828 | orchestrator | 2026-03-10 00:58:22.842834 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-10 00:58:22.842841 | orchestrator | Tuesday 10 March 2026 00:46:35 +0000 (0:00:00.922) 0:00:07.381 ********* 2026-03-10 00:58:22.842847 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.842854 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.842860 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.842866 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.842872 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.842878 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.842884 | orchestrator | 2026-03-10 00:58:22.842890 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-10 00:58:22.842897 | orchestrator | Tuesday 10 March 2026 00:46:36 +0000 (0:00:00.993) 0:00:08.375 ********* 2026-03-10 00:58:22.842903 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.842909 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.842915 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.842921 | orchestrator | ok: [testbed-node-0] 
2026-03-10 00:58:22.842927 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.842939 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.842945 | orchestrator | 2026-03-10 00:58:22.842955 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-10 00:58:22.842966 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.775) 0:00:09.150 ********* 2026-03-10 00:58:22.842976 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 00:58:22.842987 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 00:58:22.842996 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 00:58:22.843006 | orchestrator | 2026-03-10 00:58:22.843016 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-10 00:58:22.843026 | orchestrator | Tuesday 10 March 2026 00:46:38 +0000 (0:00:00.699) 0:00:09.849 ********* 2026-03-10 00:58:22.843037 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.843048 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.843059 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.843085 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.843091 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.843098 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.843104 | orchestrator | 2026-03-10 00:58:22.843110 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-10 00:58:22.843116 | orchestrator | Tuesday 10 March 2026 00:46:39 +0000 (0:00:01.282) 0:00:11.132 ********* 2026-03-10 00:58:22.843122 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 00:58:22.843129 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
2026-03-10 00:58:22.843135 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 00:58:22.843141 | orchestrator | 2026-03-10 00:58:22.843147 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-10 00:58:22.843153 | orchestrator | Tuesday 10 March 2026 00:46:41 +0000 (0:00:02.170) 0:00:13.302 ********* 2026-03-10 00:58:22.843159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-10 00:58:22.843167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-10 00:58:22.843173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-10 00:58:22.843179 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843185 | orchestrator | 2026-03-10 00:58:22.843191 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-10 00:58:22.843197 | orchestrator | Tuesday 10 March 2026 00:46:42 +0000 (0:00:00.746) 0:00:14.049 ********* 2026-03-10 00:58:22.843210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843233 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843239 | 
orchestrator | 2026-03-10 00:58:22.843245 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-10 00:58:22.843251 | orchestrator | Tuesday 10 March 2026 00:46:43 +0000 (0:00:00.964) 0:00:15.014 ********* 2026-03-10 00:58:22.843259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843286 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843293 | orchestrator | 2026-03-10 00:58:22.843299 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-10 00:58:22.843305 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:00.566) 0:00:15.580 ********* 2026-03-10 
00:58:22.843332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-10 00:46:40.218962', 'end': '2026-03-10 00:46:40.307291', 'delta': '0:00:00.088329', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-10 00:46:40.874954', 'end': '2026-03-10 00:46:40.973695', 'delta': '0:00:00.098741', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-10 00:46:41.490445', 'end': '2026-03-10 00:46:41.580861', 'delta': '0:00:00.090416', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.843363 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843373 | orchestrator | 2026-03-10 00:58:22.843384 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-10 00:58:22.843395 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:00.498) 0:00:16.078 ********* 2026-03-10 00:58:22.843404 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.843421 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.843431 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.843440 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.843451 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.843461 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.843471 | orchestrator | 2026-03-10 00:58:22.843481 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-10 00:58:22.843491 | orchestrator | Tuesday 10 March 2026 00:46:47 +0000 (0:00:02.504) 0:00:18.583 ********* 2026-03-10 00:58:22.843501 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-10 00:58:22.843512 | orchestrator | 2026-03-10 00:58:22.843596 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-10 00:58:22.843607 | orchestrator | Tuesday 10 March 2026 00:46:48 +0000 (0:00:01.305) 0:00:19.888 ********* 2026-03-10 00:58:22.843617 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843628 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.843639 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.843650 | orchestrator | 
skipping: [testbed-node-0] 2026-03-10 00:58:22.843660 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.843671 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.843680 | orchestrator | 2026-03-10 00:58:22.843691 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-10 00:58:22.843699 | orchestrator | Tuesday 10 March 2026 00:46:51 +0000 (0:00:02.941) 0:00:22.829 ********* 2026-03-10 00:58:22.843705 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843711 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.843717 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.843724 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.843729 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.843735 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.843741 | orchestrator | 2026-03-10 00:58:22.843748 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-10 00:58:22.843754 | orchestrator | Tuesday 10 March 2026 00:46:53 +0000 (0:00:01.869) 0:00:24.699 ********* 2026-03-10 00:58:22.843760 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843766 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.843772 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.843778 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.843784 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.843790 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.843796 | orchestrator | 2026-03-10 00:58:22.843802 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-10 00:58:22.843808 | orchestrator | Tuesday 10 March 2026 00:46:55 +0000 (0:00:02.599) 0:00:27.298 ********* 2026-03-10 00:58:22.843814 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843820 | orchestrator | 
2026-03-10 00:58:22.843827 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-10 00:58:22.843833 | orchestrator | Tuesday 10 March 2026 00:46:56 +0000 (0:00:00.425) 0:00:27.724 ********* 2026-03-10 00:58:22.843839 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843845 | orchestrator | 2026-03-10 00:58:22.843851 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-10 00:58:22.843857 | orchestrator | Tuesday 10 March 2026 00:46:56 +0000 (0:00:00.517) 0:00:28.241 ********* 2026-03-10 00:58:22.843864 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.843870 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843876 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.843901 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.843908 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.843914 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.843920 | orchestrator | 2026-03-10 00:58:22.843926 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-10 00:58:22.843933 | orchestrator | Tuesday 10 March 2026 00:46:58 +0000 (0:00:01.395) 0:00:29.637 ********* 2026-03-10 00:58:22.843949 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.843960 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.843970 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.843980 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.843991 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.844002 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.844012 | orchestrator | 2026-03-10 00:58:22.844022 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-10 00:58:22.844033 | orchestrator | Tuesday 10 March 2026 00:46:59 +0000 
(0:00:01.816) 0:00:31.454 ********* 2026-03-10 00:58:22.844041 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.844047 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.844053 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.844059 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.844065 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.844071 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.844077 | orchestrator | 2026-03-10 00:58:22.844084 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-10 00:58:22.844090 | orchestrator | Tuesday 10 March 2026 00:47:01 +0000 (0:00:01.407) 0:00:32.861 ********* 2026-03-10 00:58:22.844096 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.844102 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.844114 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.844121 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.844127 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.844133 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.844139 | orchestrator | 2026-03-10 00:58:22.844145 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-10 00:58:22.844152 | orchestrator | Tuesday 10 March 2026 00:47:02 +0000 (0:00:01.313) 0:00:34.175 ********* 2026-03-10 00:58:22.844158 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.844164 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.844170 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.844176 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.844182 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.844189 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.844195 | orchestrator | 2026-03-10 00:58:22.844201 | orchestrator | TASK [ceph-facts : 
Resolve bluestore_wal_device link(s)] *********************** 2026-03-10 00:58:22.844207 | orchestrator | Tuesday 10 March 2026 00:47:03 +0000 (0:00:00.895) 0:00:35.071 ********* 2026-03-10 00:58:22.844214 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.844220 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.844226 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.844232 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.844238 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.844244 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.844250 | orchestrator | 2026-03-10 00:58:22.844257 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-10 00:58:22.844263 | orchestrator | Tuesday 10 March 2026 00:47:04 +0000 (0:00:01.298) 0:00:36.369 ********* 2026-03-10 00:58:22.844269 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.844275 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.844282 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.844288 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.844294 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.844301 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.844307 | orchestrator | 2026-03-10 00:58:22.844313 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-10 00:58:22.844319 | orchestrator | Tuesday 10 March 2026 00:47:06 +0000 (0:00:01.402) 0:00:37.771 ********* 2026-03-10 00:58:22.844327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a', 'dm-uuid-LVM-WfmIIUFFJw2jaM2wZ94MbXTIU1Q3uideiEjkxN1GdAfLt9tXghZfQML4bXOjdvSs'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55', 'dm-uuid-LVM-oOevMLZLCWnJUTHGrEuKA1BjH5ndFznrD7OJhL26FbW5qogkNfLj60PsbnIbd0ju'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d', 'dm-uuid-LVM-58MU5grZlunTSBffmwjK3vjz0g18XyyLY7eFQxOxvS4FOsGnTwrKX832BjExbi3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-03-10 00:58:22.844378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1', 'dm-uuid-LVM-GuL0AeHVbbPblhWrBdLlyHKriwiZzQrZ4wTuRxRp3e6akvf3J1KcLrsLm9c2Jl40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EcjQSV-zkQb-m0mE-rsXO-uEtO-mPLD-c47yw4', 'scsi-0QEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a', 'scsi-SQEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384', 'dm-uuid-LVM-GM8tC80SUSXkY6Qfq6Ug21NaheiJUcGkkVa35BA8c8B9VfexNV4oAMnIiqhJM006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2KaSBt-SqHa-oUG8-yCxe-3388-hb3b-0vmN9g', 'scsi-0QEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d', 'scsi-SQEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1', 'scsi-SQEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2', 'dm-uuid-LVM-4k4kWJfxNe70XsJuzSaKOwSI0cLsfXJ7e6TWSi3ulBkofIuygrkM5QKQOfYvIse0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 
'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFyQk7-vg9B-ByOU-mflV-hSyH-sHKs-jpRec5', 'scsi-0QEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14', 'scsi-SQEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fgGVlE-qFyq-X1v1-PNp2-Pgr0-sUfs-zMpfLG', 'scsi-0QEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0', 'scsi-SQEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b', 'scsi-SQEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844677 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.844684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844756 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FD9Jdg-KTpo-TrBO-XfEZ-qATc-twnM-Vnsrfh', 'scsi-0QEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77', 'scsi-SQEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ioGbpd-C8RH-QMZj-HqXN-GzQI-YX9i-rUnDFY', 'scsi-0QEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730', 'scsi-SQEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88', 'scsi-SQEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.844952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.844982 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.844993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-10 00:58:22.845027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845039 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.845050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845079 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
00:58:22.845089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.845140 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 00:58:22.845151 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.845162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-10 00:58:22.845292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 00:58:22.845375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 00:58:22.845395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 00:58:22.845403 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.845410 | orchestrator |
2026-03-10 00:58:22.845416 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-10 00:58:22.845429 | orchestrator | Tuesday 10 March 2026 00:47:09 +0000 (0:00:02.945) 0:00:40.717 *********
2026-03-10 00:58:22.845441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a', 'dm-uuid-LVM-WfmIIUFFJw2jaM2wZ94MbXTIU1Q3uideiEjkxN1GdAfLt9tXghZfQML4bXOjdvSs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55', 'dm-uuid-LVM-oOevMLZLCWnJUTHGrEuKA1BjH5ndFznrD7OJhL26FbW5qogkNfLj60PsbnIbd0ju'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d', 'dm-uuid-LVM-58MU5grZlunTSBffmwjK3vjz0g18XyyLY7eFQxOxvS4FOsGnTwrKX832BjExbi3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1', 'dm-uuid-LVM-GuL0AeHVbbPblhWrBdLlyHKriwiZzQrZ4wTuRxRp3e6akvf3J1KcLrsLm9c2Jl40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845538 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845559 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845623 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845635 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.845642 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EcjQSV-zkQb-m0mE-rsXO-uEtO-mPLD-c47yw4', 'scsi-0QEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a', 'scsi-SQEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846293 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15', 
'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2KaSBt-SqHa-oUG8-yCxe-3388-hb3b-0vmN9g', 'scsi-0QEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d', 'scsi-SQEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFyQk7-vg9B-ByOU-mflV-hSyH-sHKs-jpRec5', 'scsi-0QEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14', 'scsi-SQEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384', 
'dm-uuid-LVM-GM8tC80SUSXkY6Qfq6Ug21NaheiJUcGkkVa35BA8c8B9VfexNV4oAMnIiqhJM006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1', 'scsi-SQEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846364 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fgGVlE-qFyq-X1v1-PNp2-Pgr0-sUfs-zMpfLG', 'scsi-0QEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0', 'scsi-SQEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2', 'dm-uuid-LVM-4k4kWJfxNe70XsJuzSaKOwSI0cLsfXJ7e6TWSi3ulBkofIuygrkM5QKQOfYvIse0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b', 'scsi-SQEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846457 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846492 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.846504 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 00:58:22.846547 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FD9Jdg-KTpo-TrBO-XfEZ-qATc-twnM-Vnsrfh', 'scsi-0QEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77', 'scsi-SQEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846628 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ioGbpd-C8RH-QMZj-HqXN-GzQI-YX9i-rUnDFY', 'scsi-0QEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730', 'scsi-SQEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846639 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88', 'scsi-SQEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846665 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846677 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846704 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846715 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846726 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846744 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846760 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846787 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1565e10-ffa2-449b-a353-9f25db04eeea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846810 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846823 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.846835 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846853 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846864 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846876 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846896 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846908 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846924 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846935 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846951 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c2aff1f-fd64-4855-be93-56f20738751e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846971 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.846985 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.846997 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847014 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847026 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.847038 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847049 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.847061 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847079 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847091 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847109 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847121 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847133 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeb2096c-114a-4afe-90ec-6b353b021499-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847146 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 00:58:22.847155 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.847162 | orchestrator |
2026-03-10 00:58:22.847174 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-10 00:58:22.847182 | orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:01.661) 0:00:42.379 *********
2026-03-10 00:58:22.847193 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.847204 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.847214 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.847224 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:58:22.847233 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:58:22.847242 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:58:22.847251 | orchestrator |
2026-03-10 00:58:22.847261 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-10 00:58:22.847270 | orchestrator | Tuesday 10 March 2026 00:47:12 +0000 (0:00:01.855) 0:00:44.234 *********
2026-03-10 00:58:22.847279 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.847289 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.847298 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.847307 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:58:22.847317 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:58:22.847326 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:58:22.847335 | orchestrator |
2026-03-10 00:58:22.847344 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-10 00:58:22.847354 | orchestrator | Tuesday 10 March 2026 00:47:13 +0000 (0:00:00.802) 0:00:45.037 *********
2026-03-10 00:58:22.847363 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.847373 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.847382 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.847390 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.847399 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.847408 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.847416 | orchestrator |
2026-03-10 00:58:22.847425 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-10 00:58:22.847440 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:01.116) 0:00:46.153 *********
2026-03-10 00:58:22.847458 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.847467 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.847476 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.847486 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.847495 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.847505 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.847515 | orchestrator |
2026-03-10 00:58:22.847554 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-10 00:58:22.847563 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:01.699) 0:00:47.853 *********
2026-03-10 00:58:22.847573 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.847582 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.847592 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.847601 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.847612 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.847622 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.847630 | orchestrator |
2026-03-10 00:58:22.847640 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-10 00:58:22.847651 | orchestrator | Tuesday 10 March 2026 00:47:18 +0000 (0:00:02.417) 0:00:50.271 *********
2026-03-10 00:58:22.847662 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.847672 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.847681 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.847691 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.847702 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.847711 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.847721 | orchestrator |
2026-03-10 00:58:22.847732 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-10 00:58:22.847743 | orchestrator | Tuesday 10 March 2026 00:47:20 +0000 (0:00:01.773) 0:00:52.044 *********
2026-03-10 00:58:22.847752 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-10 00:58:22.847761 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-10 00:58:22.847770 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-10 00:58:22.847779 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-10 00:58:22.847788 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-10 00:58:22.847798 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 00:58:22.847808 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-10 00:58:22.847818 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-10 00:58:22.847828 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-10 00:58:22.847839 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-10 00:58:22.847848 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-10 00:58:22.847858 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-10 00:58:22.847868 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-10 00:58:22.847878 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-10 00:58:22.847888 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-10 00:58:22.847899 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-10 00:58:22.847909 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-10 00:58:22.847920 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-10 00:58:22.847930 | orchestrator | 2026-03-10 00:58:22.847939 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-10 00:58:22.847950 | orchestrator | Tuesday 10 March 2026 00:47:25 +0000 (0:00:04.826) 0:00:56.871 ********* 2026-03-10 00:58:22.847961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-10 00:58:22.847972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-10 00:58:22.847982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-10 00:58:22.847994 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848015 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-10 00:58:22.848025 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-10 00:58:22.848034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-10 00:58:22.848045 | orchestrator | skipping: [testbed-node-4] 
2026-03-10 00:58:22.848055 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-10 00:58:22.848077 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-10 00:58:22.848089 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-10 00:58:22.848099 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.848110 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-10 00:58:22.848121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-10 00:58:22.848131 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-10 00:58:22.848142 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.848152 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-10 00:58:22.848163 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-10 00:58:22.848173 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-10 00:58:22.848183 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.848194 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-10 00:58:22.848205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-10 00:58:22.848216 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-10 00:58:22.848227 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.848237 | orchestrator | 2026-03-10 00:58:22.848247 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-10 00:58:22.848258 | orchestrator | Tuesday 10 March 2026 00:47:26 +0000 (0:00:01.240) 0:00:58.112 ********* 2026-03-10 00:58:22.848268 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.848279 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.848289 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.848310 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.848322 | orchestrator | 2026-03-10 00:58:22.848333 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-10 00:58:22.848344 | orchestrator | Tuesday 10 March 2026 00:47:27 +0000 (0:00:01.407) 0:00:59.519 ********* 2026-03-10 00:58:22.848354 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848364 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.848375 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.848386 | orchestrator | 2026-03-10 00:58:22.848397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-10 00:58:22.848407 | orchestrator | Tuesday 10 March 2026 00:47:28 +0000 (0:00:00.584) 0:01:00.104 ********* 2026-03-10 00:58:22.848418 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848429 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.848439 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.848450 | orchestrator | 2026-03-10 00:58:22.848460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-10 00:58:22.848471 | orchestrator | Tuesday 10 March 2026 00:47:29 +0000 (0:00:00.745) 0:01:00.849 ********* 2026-03-10 00:58:22.848482 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848492 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.848502 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.848513 | orchestrator | 2026-03-10 00:58:22.848574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-10 00:58:22.848587 | orchestrator | Tuesday 10 March 2026 00:47:30 +0000 (0:00:00.934) 0:01:01.784 ********* 2026-03-10 00:58:22.848597 | orchestrator | 
ok: [testbed-node-3] 2026-03-10 00:58:22.848617 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.848627 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.848638 | orchestrator | 2026-03-10 00:58:22.848648 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-10 00:58:22.848659 | orchestrator | Tuesday 10 March 2026 00:47:31 +0000 (0:00:01.270) 0:01:03.054 ********* 2026-03-10 00:58:22.848669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 00:58:22.848680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 00:58:22.848689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 00:58:22.848698 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848708 | orchestrator | 2026-03-10 00:58:22.848718 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-10 00:58:22.848729 | orchestrator | Tuesday 10 March 2026 00:47:32 +0000 (0:00:00.895) 0:01:03.950 ********* 2026-03-10 00:58:22.848740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 00:58:22.848750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 00:58:22.848761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 00:58:22.848772 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848783 | orchestrator | 2026-03-10 00:58:22.848794 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-10 00:58:22.848805 | orchestrator | Tuesday 10 March 2026 00:47:32 +0000 (0:00:00.556) 0:01:04.506 ********* 2026-03-10 00:58:22.848816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 00:58:22.848826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 00:58:22.848836 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-10 00:58:22.848846 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.848857 | orchestrator | 2026-03-10 00:58:22.848868 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-10 00:58:22.848878 | orchestrator | Tuesday 10 March 2026 00:47:33 +0000 (0:00:00.643) 0:01:05.150 ********* 2026-03-10 00:58:22.848889 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.848899 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.848909 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.848920 | orchestrator | 2026-03-10 00:58:22.848926 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-10 00:58:22.848933 | orchestrator | Tuesday 10 March 2026 00:47:33 +0000 (0:00:00.375) 0:01:05.526 ********* 2026-03-10 00:58:22.848939 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-10 00:58:22.848945 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-10 00:58:22.848965 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-10 00:58:22.848976 | orchestrator | 2026-03-10 00:58:22.848986 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-10 00:58:22.848995 | orchestrator | Tuesday 10 March 2026 00:47:35 +0000 (0:00:01.072) 0:01:06.599 ********* 2026-03-10 00:58:22.849007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 00:58:22.849018 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 00:58:22.849029 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 00:58:22.849039 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-10 00:58:22.849050 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-10 00:58:22.849060 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-10 00:58:22.849071 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-10 00:58:22.849081 | orchestrator | 2026-03-10 00:58:22.849091 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-10 00:58:22.849102 | orchestrator | Tuesday 10 March 2026 00:47:36 +0000 (0:00:01.025) 0:01:07.624 ********* 2026-03-10 00:58:22.849134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 00:58:22.849152 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 00:58:22.849163 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 00:58:22.849174 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-10 00:58:22.849184 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-10 00:58:22.849195 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-10 00:58:22.849202 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-10 00:58:22.849209 | orchestrator | 2026-03-10 00:58:22.849215 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 00:58:22.849221 | orchestrator | Tuesday 10 March 2026 00:47:38 +0000 (0:00:02.165) 0:01:09.789 ********* 2026-03-10 00:58:22.849229 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.849236 | orchestrator | 2026-03-10 00:58:22.849242 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-10 00:58:22.849249 | orchestrator | Tuesday 10 March 2026 00:47:39 +0000 (0:00:01.261) 0:01:11.051 ********* 2026-03-10 00:58:22.849256 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.849262 | orchestrator | 2026-03-10 00:58:22.849268 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 00:58:22.849274 | orchestrator | Tuesday 10 March 2026 00:47:41 +0000 (0:00:01.532) 0:01:12.583 ********* 2026-03-10 00:58:22.849281 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.849287 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.849293 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.849300 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.849306 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.849312 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.849318 | orchestrator | 2026-03-10 00:58:22.849324 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 00:58:22.849330 | orchestrator | Tuesday 10 March 2026 00:47:43 +0000 (0:00:02.654) 0:01:15.237 ********* 2026-03-10 00:58:22.849337 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.849343 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.849349 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.849355 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.849362 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.849368 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.849374 | orchestrator | 2026-03-10 00:58:22.849380 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 00:58:22.849387 | orchestrator | Tuesday 10 March 2026 00:47:45 +0000 
(0:00:01.353) 0:01:16.591 ********* 2026-03-10 00:58:22.849393 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.849399 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.849405 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.849412 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.849419 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.849425 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.849432 | orchestrator | 2026-03-10 00:58:22.849438 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 00:58:22.849444 | orchestrator | Tuesday 10 March 2026 00:47:46 +0000 (0:00:01.269) 0:01:17.861 ********* 2026-03-10 00:58:22.849451 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.849457 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.849464 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.849475 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.849481 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.849487 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.849494 | orchestrator | 2026-03-10 00:58:22.849500 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 00:58:22.849507 | orchestrator | Tuesday 10 March 2026 00:47:47 +0000 (0:00:00.716) 0:01:18.577 ********* 2026-03-10 00:58:22.849513 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.849574 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.849587 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.849593 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.849599 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.849613 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.849620 | orchestrator | 2026-03-10 00:58:22.849626 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-10 00:58:22.849633 | orchestrator | Tuesday 10 March 2026 00:47:48 +0000 (0:00:01.375) 0:01:19.953 ********* 2026-03-10 00:58:22.849639 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.849645 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.849652 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.849658 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.849664 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.849670 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.849676 | orchestrator | 2026-03-10 00:58:22.849683 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 00:58:22.849689 | orchestrator | Tuesday 10 March 2026 00:47:49 +0000 (0:00:01.029) 0:01:20.982 ********* 2026-03-10 00:58:22.849695 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.849702 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.849708 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.849714 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.849720 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.849727 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.849733 | orchestrator | 2026-03-10 00:58:22.849739 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 00:58:22.849745 | orchestrator | Tuesday 10 March 2026 00:47:50 +0000 (0:00:01.444) 0:01:22.427 ********* 2026-03-10 00:58:22.849752 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.849758 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.849764 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.849771 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.849777 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.849788 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.849795 | orchestrator 
| 2026-03-10 00:58:22.849801 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 00:58:22.849808 | orchestrator | Tuesday 10 March 2026 00:47:52 +0000 (0:00:01.595) 0:01:24.022 ********* 2026-03-10 00:58:22.849814 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.849820 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.849826 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.849833 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.849839 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.849845 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.849851 | orchestrator | 2026-03-10 00:58:22.849858 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 00:58:22.849864 | orchestrator | Tuesday 10 March 2026 00:47:54 +0000 (0:00:01.722) 0:01:25.745 ********* 2026-03-10 00:58:22.849871 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.849877 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.849884 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.849890 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.849896 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.849903 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.849909 | orchestrator | 2026-03-10 00:58:22.849915 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 00:58:22.849928 | orchestrator | Tuesday 10 March 2026 00:47:55 +0000 (0:00:01.085) 0:01:26.831 ********* 2026-03-10 00:58:22.849935 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.849941 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.849949 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.849960 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.849970 | orchestrator | ok: [testbed-node-1] 2026-03-10 
00:58:22.849981 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.849992 | orchestrator | 2026-03-10 00:58:22.850003 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 00:58:22.850062 | orchestrator | Tuesday 10 March 2026 00:47:56 +0000 (0:00:01.644) 0:01:28.476 ********* 2026-03-10 00:58:22.850074 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.850080 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.850086 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.850093 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.850099 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850106 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850112 | orchestrator | 2026-03-10 00:58:22.850118 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 00:58:22.850125 | orchestrator | Tuesday 10 March 2026 00:47:58 +0000 (0:00:01.916) 0:01:30.393 ********* 2026-03-10 00:58:22.850131 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.850138 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.850144 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.850150 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.850156 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850162 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850168 | orchestrator | 2026-03-10 00:58:22.850175 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 00:58:22.850181 | orchestrator | Tuesday 10 March 2026 00:48:00 +0000 (0:00:01.676) 0:01:32.069 ********* 2026-03-10 00:58:22.850187 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.850193 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.850199 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.850206 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 00:58:22.850212 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850218 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850224 | orchestrator | 2026-03-10 00:58:22.850231 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 00:58:22.850237 | orchestrator | Tuesday 10 March 2026 00:48:01 +0000 (0:00:00.944) 0:01:33.014 ********* 2026-03-10 00:58:22.850243 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.850249 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.850255 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.850261 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.850267 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850273 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850279 | orchestrator | 2026-03-10 00:58:22.850286 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 00:58:22.850292 | orchestrator | Tuesday 10 March 2026 00:48:02 +0000 (0:00:01.365) 0:01:34.379 ********* 2026-03-10 00:58:22.850298 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.850305 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.850311 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.850317 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.850329 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850335 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850342 | orchestrator | 2026-03-10 00:58:22.850348 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 00:58:22.850355 | orchestrator | Tuesday 10 March 2026 00:48:03 +0000 (0:00:01.064) 0:01:35.443 ********* 2026-03-10 00:58:22.850361 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.850367 | orchestrator | skipping: [testbed-node-4] 
2026-03-10 00:58:22.850387 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.850393 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.850400 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.850406 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.850412 | orchestrator | 2026-03-10 00:58:22.850419 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 00:58:22.850425 | orchestrator | Tuesday 10 March 2026 00:48:05 +0000 (0:00:01.284) 0:01:36.728 ********* 2026-03-10 00:58:22.850431 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.850437 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.850444 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.850450 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.850456 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.850462 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.850468 | orchestrator | 2026-03-10 00:58:22.850475 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 00:58:22.850481 | orchestrator | Tuesday 10 March 2026 00:48:06 +0000 (0:00:01.108) 0:01:37.837 ********* 2026-03-10 00:58:22.850487 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.850493 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.850499 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.850505 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.850511 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.850541 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.850549 | orchestrator | 2026-03-10 00:58:22.850555 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-10 00:58:22.850561 | orchestrator | Tuesday 10 March 2026 00:48:08 +0000 (0:00:02.451) 0:01:40.289 ********* 2026-03-10 00:58:22.850567 | orchestrator | changed: [testbed-node-4] 2026-03-10 
00:58:22.850574 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.850580 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.850586 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.850593 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.850599 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.850605 | orchestrator | 2026-03-10 00:58:22.850611 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-10 00:58:22.850618 | orchestrator | Tuesday 10 March 2026 00:48:10 +0000 (0:00:01.749) 0:01:42.038 ********* 2026-03-10 00:58:22.850624 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.850630 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.850637 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.850643 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.850649 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.850655 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.850662 | orchestrator | 2026-03-10 00:58:22.850668 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-10 00:58:22.850674 | orchestrator | Tuesday 10 March 2026 00:48:13 +0000 (0:00:02.666) 0:01:44.705 ********* 2026-03-10 00:58:22.850681 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.850687 | orchestrator | 2026-03-10 00:58:22.850693 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-10 00:58:22.850699 | orchestrator | Tuesday 10 March 2026 00:48:15 +0000 (0:00:02.011) 0:01:46.716 ********* 2026-03-10 00:58:22.850706 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.850712 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.850718 
| orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.850727 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.850738 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850748 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850759 | orchestrator | 2026-03-10 00:58:22.850769 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-10 00:58:22.850777 | orchestrator | Tuesday 10 March 2026 00:48:15 +0000 (0:00:00.802) 0:01:47.518 ********* 2026-03-10 00:58:22.850798 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.850809 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.850818 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.850827 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.850837 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.850847 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.850856 | orchestrator | 2026-03-10 00:58:22.850866 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-10 00:58:22.850876 | orchestrator | Tuesday 10 March 2026 00:48:17 +0000 (0:00:01.091) 0:01:48.610 ********* 2026-03-10 00:58:22.850886 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-10 00:58:22.850896 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-10 00:58:22.850906 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-10 00:58:22.850915 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-10 00:58:22.850925 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-10 00:58:22.850937 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-10 00:58:22.850946 | orchestrator 
| ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-10 00:58:22.850956 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-10 00:58:22.850967 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-10 00:58:22.850977 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-10 00:58:22.851008 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-10 00:58:22.851021 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-10 00:58:22.851031 | orchestrator | 2026-03-10 00:58:22.851041 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-10 00:58:22.851050 | orchestrator | Tuesday 10 March 2026 00:48:18 +0000 (0:00:01.683) 0:01:50.294 ********* 2026-03-10 00:58:22.851059 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.851070 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.851080 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.851090 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.851100 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.851111 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.851121 | orchestrator | 2026-03-10 00:58:22.851132 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-10 00:58:22.851143 | orchestrator | Tuesday 10 March 2026 00:48:20 +0000 (0:00:01.959) 0:01:52.254 ********* 2026-03-10 00:58:22.851153 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851163 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851169 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851176 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
00:58:22.851182 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851188 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851194 | orchestrator | 2026-03-10 00:58:22.851201 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-10 00:58:22.851207 | orchestrator | Tuesday 10 March 2026 00:48:21 +0000 (0:00:00.703) 0:01:52.958 ********* 2026-03-10 00:58:22.851220 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851226 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851233 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851239 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851246 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851252 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851265 | orchestrator | 2026-03-10 00:58:22.851272 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-10 00:58:22.851278 | orchestrator | Tuesday 10 March 2026 00:48:22 +0000 (0:00:00.950) 0:01:53.908 ********* 2026-03-10 00:58:22.851285 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851291 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851297 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851303 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851310 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851316 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851322 | orchestrator | 2026-03-10 00:58:22.851329 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-10 00:58:22.851335 | orchestrator | Tuesday 10 March 2026 00:48:23 +0000 (0:00:00.657) 0:01:54.566 ********* 2026-03-10 00:58:22.851342 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.851349 | orchestrator | 2026-03-10 00:58:22.851355 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-10 00:58:22.851361 | orchestrator | Tuesday 10 March 2026 00:48:24 +0000 (0:00:01.419) 0:01:55.985 ********* 2026-03-10 00:58:22.851368 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.851375 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.851381 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.851387 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.851393 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.851399 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.851405 | orchestrator | 2026-03-10 00:58:22.851412 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-10 00:58:22.851418 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:46.335) 0:02:42.320 ********* 2026-03-10 00:58:22.851424 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 00:58:22.851431 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 00:58:22.851437 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 00:58:22.851444 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 00:58:22.851450 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 00:58:22.851456 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 00:58:22.851462 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851469 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 00:58:22.851475 | orchestrator | skipping: [testbed-node-5] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 00:58:22.851481 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 00:58:22.851487 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851494 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 00:58:22.851500 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 00:58:22.851506 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 00:58:22.851513 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851610 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 00:58:22.851619 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 00:58:22.851625 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 00:58:22.851631 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851638 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851659 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 00:58:22.851673 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 00:58:22.851680 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 00:58:22.851686 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851692 | orchestrator | 2026-03-10 00:58:22.851698 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-10 00:58:22.851704 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:00.903) 0:02:43.224 ********* 2026-03-10 00:58:22.851711 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851717 | orchestrator | skipping: [testbed-node-4] 2026-03-10 
00:58:22.851723 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851729 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851736 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851742 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851748 | orchestrator | 2026-03-10 00:58:22.851754 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-10 00:58:22.851760 | orchestrator | Tuesday 10 March 2026 00:49:12 +0000 (0:00:01.320) 0:02:44.544 ********* 2026-03-10 00:58:22.851767 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851773 | orchestrator | 2026-03-10 00:58:22.851779 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-10 00:58:22.851785 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:00.184) 0:02:44.728 ********* 2026-03-10 00:58:22.851792 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851798 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851804 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851815 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851822 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851828 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851835 | orchestrator | 2026-03-10 00:58:22.851841 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-10 00:58:22.851847 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:00.793) 0:02:45.521 ********* 2026-03-10 00:58:22.851854 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851860 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851867 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851873 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851879 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
00:58:22.851885 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851891 | orchestrator | 2026-03-10 00:58:22.851897 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-10 00:58:22.851903 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:00.896) 0:02:46.417 ********* 2026-03-10 00:58:22.851909 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.851916 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.851922 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.851928 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.851934 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.851940 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.851947 | orchestrator | 2026-03-10 00:58:22.851958 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-10 00:58:22.851968 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:00.748) 0:02:47.166 ********* 2026-03-10 00:58:22.851978 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.851988 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.851997 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.852007 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.852018 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.852025 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.852031 | orchestrator | 2026-03-10 00:58:22.852037 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-10 00:58:22.852043 | orchestrator | Tuesday 10 March 2026 00:49:18 +0000 (0:00:02.523) 0:02:49.689 ********* 2026-03-10 00:58:22.852055 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.852062 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.852068 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.852074 | orchestrator | ok: [testbed-node-0] 
2026-03-10 00:58:22.852080 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.852086 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.852092 | orchestrator | 2026-03-10 00:58:22.852098 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-10 00:58:22.852105 | orchestrator | Tuesday 10 March 2026 00:49:19 +0000 (0:00:01.175) 0:02:50.865 ********* 2026-03-10 00:58:22.852112 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.852119 | orchestrator | 2026-03-10 00:58:22.852126 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-10 00:58:22.852132 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:01.379) 0:02:52.244 ********* 2026-03-10 00:58:22.852138 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852144 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852150 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852157 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852163 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852169 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852176 | orchestrator | 2026-03-10 00:58:22.852182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-10 00:58:22.852189 | orchestrator | Tuesday 10 March 2026 00:49:21 +0000 (0:00:00.769) 0:02:53.014 ********* 2026-03-10 00:58:22.852195 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852201 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852207 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852213 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852219 | orchestrator | skipping: [testbed-node-2] 2026-03-10 
00:58:22.852225 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852231 | orchestrator | 2026-03-10 00:58:22.852238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-10 00:58:22.852244 | orchestrator | Tuesday 10 March 2026 00:49:22 +0000 (0:00:00.659) 0:02:53.673 ********* 2026-03-10 00:58:22.852250 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852256 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852277 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852284 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852290 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852296 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852302 | orchestrator | 2026-03-10 00:58:22.852308 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-10 00:58:22.852314 | orchestrator | Tuesday 10 March 2026 00:49:23 +0000 (0:00:01.454) 0:02:55.128 ********* 2026-03-10 00:58:22.852321 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852327 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852333 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852339 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852345 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852351 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852358 | orchestrator | 2026-03-10 00:58:22.852364 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-10 00:58:22.852370 | orchestrator | Tuesday 10 March 2026 00:49:24 +0000 (0:00:01.042) 0:02:56.171 ********* 2026-03-10 00:58:22.852376 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852383 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852389 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
00:58:22.852395 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852401 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852407 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852413 | orchestrator | 2026-03-10 00:58:22.852425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-10 00:58:22.852432 | orchestrator | Tuesday 10 March 2026 00:49:25 +0000 (0:00:01.359) 0:02:57.531 ********* 2026-03-10 00:58:22.852438 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852448 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852454 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852461 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852467 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852473 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852479 | orchestrator | 2026-03-10 00:58:22.852485 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-10 00:58:22.852492 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.764) 0:02:58.296 ********* 2026-03-10 00:58:22.852498 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.852504 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852510 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852532 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852539 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852545 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852551 | orchestrator | 2026-03-10 00:58:22.852557 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-10 00:58:22.852564 | orchestrator | Tuesday 10 March 2026 00:49:27 +0000 (0:00:01.172) 0:02:59.468 ********* 2026-03-10 00:58:22.852570 | orchestrator | skipping: [testbed-node-3] 2026-03-10 
00:58:22.852576 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.852582 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.852588 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.852594 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.852600 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.852606 | orchestrator | 2026-03-10 00:58:22.852612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-10 00:58:22.852619 | orchestrator | Tuesday 10 March 2026 00:49:28 +0000 (0:00:00.922) 0:03:00.390 ********* 2026-03-10 00:58:22.852625 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.852631 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.852637 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.852643 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.852649 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.852655 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.852661 | orchestrator | 2026-03-10 00:58:22.852668 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-10 00:58:22.852674 | orchestrator | Tuesday 10 March 2026 00:49:31 +0000 (0:00:02.242) 0:03:02.633 ********* 2026-03-10 00:58:22.852680 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.852686 | orchestrator | 2026-03-10 00:58:22.852692 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-10 00:58:22.852698 | orchestrator | Tuesday 10 March 2026 00:49:32 +0000 (0:00:01.171) 0:03:03.805 ********* 2026-03-10 00:58:22.852705 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-10 00:58:22.852711 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-10 
00:58:22.852717 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-10 00:58:22.852723 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-10 00:58:22.852730 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-10 00:58:22.852736 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-10 00:58:22.852742 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-10 00:58:22.852749 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-10 00:58:22.852754 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-10 00:58:22.852761 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-10 00:58:22.852772 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-10 00:58:22.852779 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-10 00:58:22.852785 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-10 00:58:22.852792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-10 00:58:22.852798 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-10 00:58:22.852804 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-10 00:58:22.852810 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-10 00:58:22.852816 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-10 00:58:22.852835 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-10 00:58:22.852842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-10 00:58:22.852848 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-10 00:58:22.852854 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-10 00:58:22.852860 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 
2026-03-10 00:58:22.852866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-10 00:58:22.852873 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-10 00:58:22.852879 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-10 00:58:22.852885 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-10 00:58:22.852891 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-10 00:58:22.852897 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-10 00:58:22.852903 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-10 00:58:22.852910 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-10 00:58:22.852916 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-10 00:58:22.852922 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-10 00:58:22.852928 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-10 00:58:22.852934 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-10 00:58:22.852940 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-10 00:58:22.852952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-10 00:58:22.852964 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-10 00:58:22.852974 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-10 00:58:22.852983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-10 00:58:22.852994 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-10 00:58:22.853005 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-10 00:58:22.853016 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-10 
00:58:22.853027 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-10 00:58:22.853037 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-10 00:58:22.853049 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 00:58:22.853055 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-10 00:58:22.853061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 00:58:22.853068 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-10 00:58:22.853074 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 00:58:22.853080 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 00:58:22.853086 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 00:58:22.853092 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 00:58:22.853104 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-10 00:58:22.853110 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 00:58:22.853116 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 00:58:22.853122 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 00:58:22.853129 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 00:58:22.853135 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 00:58:22.853141 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 00:58:22.853148 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 00:58:22.853154 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/bootstrap-osd) 2026-03-10 00:58:22.853160 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 00:58:22.853166 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 00:58:22.853172 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 00:58:22.853178 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 00:58:22.853185 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 00:58:22.853191 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 00:58:22.853197 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 00:58:22.853203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 00:58:22.853209 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 00:58:22.853215 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 00:58:22.853221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 00:58:22.853228 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 00:58:22.853234 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 00:58:22.853241 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 00:58:22.853260 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 00:58:22.853268 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 00:58:22.853274 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-10 00:58:22.853281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 
00:58:22.853287 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 00:58:22.853293 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 00:58:22.853299 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-10 00:58:22.853306 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 00:58:22.853313 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-10 00:58:22.853319 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 00:58:22.853325 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-10 00:58:22.853331 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-10 00:58:22.853337 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-10 00:58:22.853344 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 00:58:22.853350 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-10 00:58:22.853356 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-10 00:58:22.853362 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-10 00:58:22.853378 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-10 00:58:22.853385 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-10 00:58:22.853391 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-10 00:58:22.853397 | orchestrator | 2026-03-10 00:58:22.853403 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-10 00:58:22.853409 | orchestrator | Tuesday 10 March 2026 00:49:39 +0000 (0:00:07.295) 0:03:11.100 ********* 2026-03-10 00:58:22.853415 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.853422 | orchestrator | 
skipping: [testbed-node-1]
2026-03-10 00:58:22.853428 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853435 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.853441 | orchestrator |
2026-03-10 00:58:22.853448 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-10 00:58:22.853454 | orchestrator | Tuesday 10 March 2026 00:49:40 +0000 (0:00:01.078) 0:03:12.179 *********
2026-03-10 00:58:22.853460 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.853467 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.853473 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.853480 | orchestrator |
2026-03-10 00:58:22.853486 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-10 00:58:22.853492 | orchestrator | Tuesday 10 March 2026 00:49:41 +0000 (0:00:01.009) 0:03:13.189 *********
2026-03-10 00:58:22.853498 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.853504 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.853511 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.853558 | orchestrator |
2026-03-10 00:58:22.853567 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-10 00:58:22.853573 | orchestrator | Tuesday 10 March 2026 00:49:43 +0000 (0:00:01.416) 0:03:14.605 *********
2026-03-10 00:58:22.853579 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.853585 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.853592 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.853598 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853604 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853611 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853617 | orchestrator |
2026-03-10 00:58:22.853623 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-10 00:58:22.853629 | orchestrator | Tuesday 10 March 2026 00:49:44 +0000 (0:00:00.950) 0:03:15.555 *********
2026-03-10 00:58:22.853636 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.853642 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.853648 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.853655 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853661 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853667 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853673 | orchestrator |
2026-03-10 00:58:22.853679 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-10 00:58:22.853685 | orchestrator | Tuesday 10 March 2026 00:49:45 +0000 (0:00:01.002) 0:03:16.558 *********
2026-03-10 00:58:22.853692 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.853698 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.853712 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.853719 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853725 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853731 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853737 | orchestrator |
2026-03-10 00:58:22.853757 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-10 00:58:22.853764 | orchestrator | Tuesday 10 March 2026 00:49:45 +0000 (0:00:00.750) 0:03:17.309 *********
2026-03-10 00:58:22.853770 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.853777 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.853783 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.853789 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853796 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853802 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853808 | orchestrator |
2026-03-10 00:58:22.853814 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-10 00:58:22.853821 | orchestrator | Tuesday 10 March 2026 00:49:46 +0000 (0:00:00.747) 0:03:18.056 *********
2026-03-10 00:58:22.853827 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.853833 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.853839 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.853845 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853851 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853857 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853863 | orchestrator |
2026-03-10 00:58:22.853869 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-10 00:58:22.853876 | orchestrator | Tuesday 10 March 2026 00:49:47 +0000 (0:00:01.231) 0:03:19.288 *********
2026-03-10 00:58:22.853882 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.853888 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.853895 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.853901 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853907 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853913 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.853919 | orchestrator |
2026-03-10 00:58:22.853929 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-10 00:58:22.853936 | orchestrator | Tuesday 10 March 2026 00:49:48 +0000 (0:00:01.135) 0:03:20.423 *********
2026-03-10 00:58:22.853942 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.853950 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.853960 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.853970 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.853982 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.853992 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854002 | orchestrator |
2026-03-10 00:58:22.854045 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-10 00:58:22.854055 | orchestrator | Tuesday 10 March 2026 00:49:49 +0000 (0:00:00.729) 0:03:21.152 *********
2026-03-10 00:58:22.854062 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854068 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854074 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854080 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854086 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854092 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854099 | orchestrator |
2026-03-10 00:58:22.854105 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-10 00:58:22.854111 | orchestrator | Tuesday 10 March 2026 00:49:50 +0000 (0:00:01.096) 0:03:22.249 *********
2026-03-10 00:58:22.854117 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854124 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854130 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854142 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.854148 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.854154 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.854160 | orchestrator |
2026-03-10 00:58:22.854167 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-10 00:58:22.854173 | orchestrator | Tuesday 10 March 2026 00:49:53 +0000 (0:00:02.970) 0:03:25.219 *********
2026-03-10 00:58:22.854179 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.854185 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.854191 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.854198 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854204 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854210 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854216 | orchestrator |
2026-03-10 00:58:22.854223 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-10 00:58:22.854229 | orchestrator | Tuesday 10 March 2026 00:49:54 +0000 (0:00:00.892) 0:03:26.112 *********
2026-03-10 00:58:22.854235 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.854241 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.854247 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.854254 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854260 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854266 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854272 | orchestrator |
2026-03-10 00:58:22.854278 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-10 00:58:22.854284 | orchestrator | Tuesday 10 March 2026 00:49:55 +0000 (0:00:00.929) 0:03:27.041 *********
2026-03-10 00:58:22.854291 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854297 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854303 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854309 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854315 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854321 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854328 | orchestrator |
2026-03-10 00:58:22.854334 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-10 00:58:22.854340 | orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:01.014) 0:03:28.056 *********
2026-03-10 00:58:22.854347 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.854353 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.854359 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.854366 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854387 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854394 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854400 | orchestrator |
2026-03-10 00:58:22.854407 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-10 00:58:22.854413 | orchestrator | Tuesday 10 March 2026 00:49:57 +0000 (0:00:00.678) 0:03:28.734 *********
2026-03-10 00:58:22.854421 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-10 00:58:22.854429 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-10 00:58:22.854437 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854448 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-10 00:58:22.854459 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-10 00:58:22.854466 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-10 00:58:22.854472 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854478 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-10 00:58:22.854485 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854491 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854497 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854503 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854510 | orchestrator |
2026-03-10 00:58:22.854531 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-10 00:58:22.854540 | orchestrator | Tuesday 10 March 2026 00:49:57 +0000 (0:00:00.794) 0:03:29.529 *********
2026-03-10 00:58:22.854546 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854553 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854560 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854566 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854573 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854580 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854587 | orchestrator |
2026-03-10 00:58:22.854593 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-10 00:58:22.854600 | orchestrator | Tuesday 10 March 2026 00:49:58 +0000 (0:00:00.597) 0:03:30.127 *********
2026-03-10 00:58:22.854607 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854613 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854620 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854627 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854634 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854640 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854647 | orchestrator |
2026-03-10 00:58:22.854654 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-10 00:58:22.854660 | orchestrator | Tuesday 10 March 2026 00:49:59 +0000 (0:00:00.827) 0:03:30.955 *********
2026-03-10 00:58:22.854667 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854674 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854680 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854687 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854693 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854700 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854707 | orchestrator |
2026-03-10 00:58:22.854713 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-10 00:58:22.854720 | orchestrator | Tuesday 10 March 2026 00:50:00 +0000 (0:00:00.638) 0:03:31.594 *********
2026-03-10 00:58:22.854727 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854733 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854740 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854751 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854758 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854764 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854771 | orchestrator |
2026-03-10 00:58:22.854778 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-10 00:58:22.854798 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:01.104) 0:03:32.698 *********
2026-03-10 00:58:22.854805 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854812 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.854818 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.854825 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854832 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854839 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854846 | orchestrator |
2026-03-10 00:58:22.854853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-10 00:58:22.854860 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:00.593) 0:03:33.292 *********
2026-03-10 00:58:22.854866 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.854873 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.854880 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.854887 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.854894 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.854900 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.854907 | orchestrator |
2026-03-10 00:58:22.854914 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-10 00:58:22.854920 | orchestrator | Tuesday 10 March 2026 00:50:02 +0000 (0:00:01.083) 0:03:34.376 *********
2026-03-10 00:58:22.854927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.854934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.854941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.854950 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.854962 | orchestrator |
2026-03-10 00:58:22.854974 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-10 00:58:22.854992 | orchestrator | Tuesday 10 March 2026 00:50:03 +0000 (0:00:00.433) 0:03:34.809 *********
2026-03-10 00:58:22.855004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.855015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.855026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.855038 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.855051 | orchestrator |
2026-03-10 00:58:22.855063 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-10 00:58:22.855075 | orchestrator | Tuesday 10 March 2026 00:50:03 +0000 (0:00:00.379) 0:03:35.189 *********
2026-03-10 00:58:22.855087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.855099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.855112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.855124 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.855131 | orchestrator |
2026-03-10 00:58:22.855137 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-10 00:58:22.855144 | orchestrator | Tuesday 10 March 2026 00:50:04 +0000 (0:00:00.378) 0:03:35.567 *********
2026-03-10 00:58:22.855151 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.855157 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.855164 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.855170 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.855177 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.855184 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.855191 | orchestrator |
2026-03-10 00:58:22.855198 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-10 00:58:22.855204 | orchestrator | Tuesday 10 March 2026 00:50:04 +0000 (0:00:00.629) 0:03:36.196 *********
2026-03-10 00:58:22.855217 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-10 00:58:22.855224 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-10 00:58:22.855242 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-10 00:58:22.855249 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-10 00:58:22.855255 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.855262 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-10 00:58:22.855268 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.855275 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-10 00:58:22.855282 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.855289 | orchestrator |
2026-03-10 00:58:22.855295 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-10 00:58:22.855302 | orchestrator | Tuesday 10 March 2026 00:50:07 +0000 (0:00:02.866) 0:03:39.063 *********
2026-03-10 00:58:22.855309 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.855315 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.855322 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.855328 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:58:22.855335 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:58:22.855341 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:58:22.855348 | orchestrator |
2026-03-10 00:58:22.855355 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-10 00:58:22.855361 | orchestrator | Tuesday 10 March 2026 00:50:10 +0000 (0:00:02.885) 0:03:41.948 *********
2026-03-10 00:58:22.855389 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.855396 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.855402 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.855409 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:58:22.855415 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:58:22.855422 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:58:22.855428 | orchestrator |
2026-03-10 00:58:22.855435 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-10 00:58:22.855442 | orchestrator | Tuesday 10 March 2026 00:50:11 +0000 (0:00:00.948) 0:03:42.896 *********
2026-03-10 00:58:22.855448 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.855455 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.855462 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.855469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:58:22.855475 | orchestrator |
2026-03-10 00:58:22.855482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-10 00:58:22.855504 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:01.359) 0:03:44.256 *********
2026-03-10 00:58:22.855511 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:58:22.855659 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:58:22.855688 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:58:22.855695 | orchestrator |
2026-03-10 00:58:22.855702 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-10 00:58:22.855710 | orchestrator | Tuesday 10 March 2026 00:50:13 +0000 (0:00:00.520) 0:03:44.777 *********
2026-03-10 00:58:22.855716 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:58:22.855723 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:58:22.855730 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:58:22.855736 | orchestrator |
2026-03-10 00:58:22.855743 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-10 00:58:22.855750 | orchestrator | Tuesday 10 March 2026 00:50:14 +0000 (0:00:01.109) 0:03:45.887 *********
2026-03-10 00:58:22.855757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 00:58:22.855763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 00:58:22.855770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-10 00:58:22.855776 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.855790 | orchestrator |
2026-03-10 00:58:22.855796 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-10 00:58:22.855802 | orchestrator | Tuesday 10 March 2026 00:50:15 +0000 (0:00:01.043) 0:03:46.930 *********
2026-03-10 00:58:22.855808 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:58:22.855815 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:58:22.855821 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:58:22.855827 | orchestrator |
2026-03-10 00:58:22.855833 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-10 00:58:22.855844 | orchestrator | Tuesday 10 March 2026 00:50:15 +0000 (0:00:00.349) 0:03:47.280 *********
2026-03-10 00:58:22.855850 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.855857 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.855863 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.855869 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.855875 | orchestrator |
2026-03-10 00:58:22.855882 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-10 00:58:22.855888 | orchestrator | Tuesday 10 March 2026 00:50:16 +0000 (0:00:00.971) 0:03:48.251 *********
2026-03-10 00:58:22.855894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.855901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.855907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.855913 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.855919 | orchestrator |
2026-03-10 00:58:22.855925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-10 00:58:22.855932 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:00.386) 0:03:48.638 *********
2026-03-10 00:58:22.855938 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.855944 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.855961 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.855968 | orchestrator |
2026-03-10 00:58:22.855982 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-10 00:58:22.855988 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:00.314) 0:03:48.952 *********
2026-03-10 00:58:22.855994 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856002 | orchestrator |
2026-03-10 00:58:22.856013 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-10 00:58:22.856022 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:00.224) 0:03:49.178 *********
2026-03-10 00:58:22.856028 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856034 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.856040 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.856047 | orchestrator |
2026-03-10 00:58:22.856053 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-10 00:58:22.856059 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:00.321) 0:03:49.500 *********
2026-03-10 00:58:22.856066 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856072 | orchestrator |
2026-03-10 00:58:22.856078 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-10 00:58:22.856084 | orchestrator | Tuesday 10 March 2026 00:50:18 +0000 (0:00:00.232) 0:03:49.733 *********
2026-03-10 00:58:22.856090 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856096 | orchestrator |
2026-03-10 00:58:22.856103 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-10 00:58:22.856109 | orchestrator | Tuesday 10 March 2026 00:50:18 +0000 (0:00:00.204) 0:03:49.938 *********
2026-03-10 00:58:22.856115 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856121 | orchestrator |
2026-03-10 00:58:22.856128 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-10 00:58:22.856134 | orchestrator | Tuesday 10 March 2026 00:50:18 +0000 (0:00:00.119) 0:03:50.057 *********
2026-03-10 00:58:22.856140 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856150 | orchestrator |
2026-03-10 00:58:22.856156 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-10 00:58:22.856162 | orchestrator | Tuesday 10 March 2026 00:50:19 +0000 (0:00:00.622) 0:03:50.679 *********
2026-03-10 00:58:22.856168 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856175 | orchestrator |
2026-03-10 00:58:22.856181 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-10 00:58:22.856187 | orchestrator | Tuesday 10 March 2026 00:50:19 +0000 (0:00:00.210) 0:03:50.890 *********
2026-03-10 00:58:22.856193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.856200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.856206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.856212 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856218 | orchestrator |
2026-03-10 00:58:22.856225 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-10 00:58:22.856249 | orchestrator | Tuesday 10 March 2026 00:50:19 +0000 (0:00:00.482) 0:03:51.373 *********
2026-03-10 00:58:22.856257 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856263 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.856269 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.856275 | orchestrator |
2026-03-10 00:58:22.856282 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-10 00:58:22.856288 | orchestrator | Tuesday 10 March 2026 00:50:20 +0000 (0:00:00.354) 0:03:51.727 *********
2026-03-10 00:58:22.856294 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856300 | orchestrator |
2026-03-10 00:58:22.856307 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-10 00:58:22.856313 | orchestrator | Tuesday 10 March 2026 00:50:20 +0000 (0:00:00.215) 0:03:51.943 *********
2026-03-10 00:58:22.856319 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856325 | orchestrator |
2026-03-10 00:58:22.856331 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-10 00:58:22.856338 | orchestrator | Tuesday 10 March 2026 00:50:20 +0000 (0:00:00.216) 0:03:52.160 *********
2026-03-10 00:58:22.856344 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.856350 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.856356 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.856363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.856369 | orchestrator |
2026-03-10 00:58:22.856375 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-10 00:58:22.856381 | orchestrator | Tuesday 10 March 2026 00:50:21 +0000 (0:00:01.014) 0:03:53.174 *********
2026-03-10 00:58:22.856392 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.856398 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.856404 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.856410 | orchestrator |
2026-03-10 00:58:22.856417 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-10 00:58:22.856423 | orchestrator | Tuesday 10 March 2026 00:50:21 +0000 (0:00:00.311) 0:03:53.486 *********
2026-03-10 00:58:22.856430 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.856436 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.856442 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.856448 | orchestrator |
2026-03-10 00:58:22.856454 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-10 00:58:22.856460 | orchestrator | Tuesday 10 March 2026 00:50:23 +0000 (0:00:01.347) 0:03:54.833 *********
2026-03-10 00:58:22.856467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.856473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.856479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.856485 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856497 | orchestrator |
2026-03-10 00:58:22.856504 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-10 00:58:22.856510 | orchestrator | Tuesday 10 March 2026 00:50:24 +0000 (0:00:01.194) 0:03:56.027 *********
2026-03-10 00:58:22.856530 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.856537 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.856543 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.856549 | orchestrator |
2026-03-10 00:58:22.856556 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-10 00:58:22.856562 | orchestrator | Tuesday 10 March 2026 00:50:25 +0000 (0:00:00.953) 0:03:56.981 *********
2026-03-10 00:58:22.856568 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.856574 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.856581 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.856587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.856593 | orchestrator |
2026-03-10 00:58:22.856599 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-10 00:58:22.856605 | orchestrator | Tuesday 10 March 2026 00:50:26 +0000 (0:00:01.290) 0:03:58.272 *********
2026-03-10 00:58:22.856612 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.856618 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.856625 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.856631 | orchestrator |
2026-03-10 00:58:22.856637 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-10 00:58:22.856643 | orchestrator | Tuesday 10 March 2026 00:50:27 +0000 (0:00:00.729) 0:03:59.002 *********
2026-03-10 00:58:22.856650 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.856656 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.856662 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.856668 | orchestrator |
2026-03-10 00:58:22.856674 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-10 00:58:22.856681 | orchestrator | Tuesday 10 March 2026 00:50:28 +0000 (0:00:01.444) 0:04:00.446 *********
2026-03-10 00:58:22.856687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.856693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.856700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.856706 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856713 | orchestrator |
2026-03-10 00:58:22.856719 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-10 00:58:22.856725 | orchestrator | Tuesday 10 March 2026 00:50:29 +0000 (0:00:00.654) 0:04:01.101 *********
2026-03-10 00:58:22.856731 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.856737 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.856744 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.856750 | orchestrator |
2026-03-10 00:58:22.856756 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-10 00:58:22.856762 | orchestrator | Tuesday 10 March 2026 00:50:29 +0000 (0:00:00.416) 0:04:01.518 *********
2026-03-10 00:58:22.856769 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856775 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.856781 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.856788 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.856794 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:58:22.856811 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:58:22.856818 | orchestrator |
2026-03-10 00:58:22.856824 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-10 00:58:22.856830 | orchestrator | Tuesday 10 March 2026 00:50:31 +0000 (0:00:01.040) 0:04:02.558 *********
2026-03-10 00:58:22.856837 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.856843 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.856850 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.856861 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:58:22.856868 | orchestrator |
2026-03-10 00:58:22.856874 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-10 00:58:22.856880 | orchestrator | Tuesday 10 March 2026 00:50:32 +0000 (0:00:00.998) 0:04:03.556 *********
2026-03-10 00:58:22.856886 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:58:22.856893 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:58:22.856899 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:58:22.856905 | orchestrator |
2026-03-10 00:58:22.856911 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-10 00:58:22.856918 | orchestrator | Tuesday 10 March 2026 00:50:32 +0000 (0:00:00.633) 0:04:04.189 *********
2026-03-10 00:58:22.856924 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:58:22.856930 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:58:22.856936 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:58:22.856942 | orchestrator |
2026-03-10 00:58:22.856948 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-10 00:58:22.856954 | orchestrator | Tuesday 10 March 2026 00:50:34 +0000 (0:00:01.374) 0:04:05.563 *********
2026-03-10 00:58:22.856965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 00:58:22.856971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 00:58:22.856977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-10 00:58:22.856984 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:58:22.856990 | orchestrator |
2026-03-10 00:58:22.856996 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-10 00:58:22.857002 | orchestrator | Tuesday 10 March 2026 00:50:34 +0000 (0:00:00.673) 0:04:06.237 *********
2026-03-10 00:58:22.857009 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:58:22.857015 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:58:22.857021 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:58:22.857027 | orchestrator |
2026-03-10 00:58:22.857037 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-10 00:58:22.857046 | orchestrator |
2026-03-10
00:58:22.857053 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 00:58:22.857059 | orchestrator | Tuesday 10 March 2026 00:50:35 +0000 (0:00:01.067) 0:04:07.305 ********* 2026-03-10 00:58:22.857066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.857073 | orchestrator | 2026-03-10 00:58:22.857079 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 00:58:22.857085 | orchestrator | Tuesday 10 March 2026 00:50:36 +0000 (0:00:00.920) 0:04:08.225 ********* 2026-03-10 00:58:22.857092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.857098 | orchestrator | 2026-03-10 00:58:22.857104 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 00:58:22.857110 | orchestrator | Tuesday 10 March 2026 00:50:37 +0000 (0:00:00.651) 0:04:08.876 ********* 2026-03-10 00:58:22.857116 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857123 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857129 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857135 | orchestrator | 2026-03-10 00:58:22.857141 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 00:58:22.857148 | orchestrator | Tuesday 10 March 2026 00:50:38 +0000 (0:00:01.151) 0:04:10.027 ********* 2026-03-10 00:58:22.857154 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857160 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857166 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857172 | orchestrator | 2026-03-10 00:58:22.857178 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-10 00:58:22.857185 | orchestrator | Tuesday 10 March 2026 00:50:38 +0000 (0:00:00.367) 0:04:10.395 ********* 2026-03-10 00:58:22.857198 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857205 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857211 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857218 | orchestrator | 2026-03-10 00:58:22.857224 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 00:58:22.857230 | orchestrator | Tuesday 10 March 2026 00:50:39 +0000 (0:00:00.474) 0:04:10.869 ********* 2026-03-10 00:58:22.857236 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857242 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857249 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857255 | orchestrator | 2026-03-10 00:58:22.857261 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 00:58:22.857267 | orchestrator | Tuesday 10 March 2026 00:50:39 +0000 (0:00:00.485) 0:04:11.355 ********* 2026-03-10 00:58:22.857273 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857280 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857286 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857292 | orchestrator | 2026-03-10 00:58:22.857298 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 00:58:22.857304 | orchestrator | Tuesday 10 March 2026 00:50:41 +0000 (0:00:01.424) 0:04:12.780 ********* 2026-03-10 00:58:22.857311 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857317 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857323 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857329 | orchestrator | 2026-03-10 00:58:22.857336 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 
00:58:22.857342 | orchestrator | Tuesday 10 March 2026 00:50:41 +0000 (0:00:00.397) 0:04:13.178 ********* 2026-03-10 00:58:22.857363 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857370 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857376 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857382 | orchestrator | 2026-03-10 00:58:22.857388 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 00:58:22.857395 | orchestrator | Tuesday 10 March 2026 00:50:42 +0000 (0:00:00.435) 0:04:13.614 ********* 2026-03-10 00:58:22.857401 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857407 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857413 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857420 | orchestrator | 2026-03-10 00:58:22.857426 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 00:58:22.857432 | orchestrator | Tuesday 10 March 2026 00:50:42 +0000 (0:00:00.874) 0:04:14.488 ********* 2026-03-10 00:58:22.857439 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857445 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857451 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857457 | orchestrator | 2026-03-10 00:58:22.857463 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 00:58:22.857470 | orchestrator | Tuesday 10 March 2026 00:50:44 +0000 (0:00:01.226) 0:04:15.714 ********* 2026-03-10 00:58:22.857476 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857482 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857488 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857494 | orchestrator | 2026-03-10 00:58:22.857500 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 00:58:22.857506 | orchestrator | 
Tuesday 10 March 2026 00:50:44 +0000 (0:00:00.440) 0:04:16.155 ********* 2026-03-10 00:58:22.857513 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857532 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857543 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857549 | orchestrator | 2026-03-10 00:58:22.857555 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 00:58:22.857562 | orchestrator | Tuesday 10 March 2026 00:50:45 +0000 (0:00:00.493) 0:04:16.649 ********* 2026-03-10 00:58:22.857568 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857579 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857585 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857591 | orchestrator | 2026-03-10 00:58:22.857597 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 00:58:22.857603 | orchestrator | Tuesday 10 March 2026 00:50:45 +0000 (0:00:00.418) 0:04:17.068 ********* 2026-03-10 00:58:22.857609 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857616 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857622 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857628 | orchestrator | 2026-03-10 00:58:22.857634 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 00:58:22.857641 | orchestrator | Tuesday 10 March 2026 00:50:45 +0000 (0:00:00.387) 0:04:17.455 ********* 2026-03-10 00:58:22.857647 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857653 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857659 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857665 | orchestrator | 2026-03-10 00:58:22.857671 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 00:58:22.857678 | orchestrator | Tuesday 10 March 2026 
00:50:46 +0000 (0:00:00.720) 0:04:18.176 ********* 2026-03-10 00:58:22.857684 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857690 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857696 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857703 | orchestrator | 2026-03-10 00:58:22.857709 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 00:58:22.857715 | orchestrator | Tuesday 10 March 2026 00:50:47 +0000 (0:00:00.454) 0:04:18.631 ********* 2026-03-10 00:58:22.857721 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857728 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.857734 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.857740 | orchestrator | 2026-03-10 00:58:22.857746 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 00:58:22.857752 | orchestrator | Tuesday 10 March 2026 00:50:47 +0000 (0:00:00.386) 0:04:19.017 ********* 2026-03-10 00:58:22.857758 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857765 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857771 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857777 | orchestrator | 2026-03-10 00:58:22.857783 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 00:58:22.857789 | orchestrator | Tuesday 10 March 2026 00:50:47 +0000 (0:00:00.400) 0:04:19.417 ********* 2026-03-10 00:58:22.857795 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857802 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857808 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857814 | orchestrator | 2026-03-10 00:58:22.857820 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 00:58:22.857826 | orchestrator | Tuesday 10 March 2026 00:50:48 +0000 (0:00:00.548) 
0:04:19.966 ********* 2026-03-10 00:58:22.857832 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857838 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857844 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857851 | orchestrator | 2026-03-10 00:58:22.857857 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-10 00:58:22.857863 | orchestrator | Tuesday 10 March 2026 00:50:49 +0000 (0:00:00.743) 0:04:20.710 ********* 2026-03-10 00:58:22.857869 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.857875 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.857882 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.857888 | orchestrator | 2026-03-10 00:58:22.857894 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-10 00:58:22.857900 | orchestrator | Tuesday 10 March 2026 00:50:49 +0000 (0:00:00.440) 0:04:21.150 ********* 2026-03-10 00:58:22.857906 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.857913 | orchestrator | 2026-03-10 00:58:22.857922 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-10 00:58:22.857929 | orchestrator | Tuesday 10 March 2026 00:50:50 +0000 (0:00:00.814) 0:04:21.965 ********* 2026-03-10 00:58:22.857935 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.857941 | orchestrator | 2026-03-10 00:58:22.857961 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-10 00:58:22.857968 | orchestrator | Tuesday 10 March 2026 00:50:50 +0000 (0:00:00.147) 0:04:22.113 ********* 2026-03-10 00:58:22.857974 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:58:22.857980 | orchestrator | 2026-03-10 00:58:22.857986 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-10 00:58:22.857993 | orchestrator | Tuesday 10 March 2026 00:50:51 +0000 (0:00:01.140) 0:04:23.254 ********* 2026-03-10 00:58:22.857999 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858005 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.858011 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.858047 | orchestrator | 2026-03-10 00:58:22.858054 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-10 00:58:22.858060 | orchestrator | Tuesday 10 March 2026 00:50:52 +0000 (0:00:00.537) 0:04:23.792 ********* 2026-03-10 00:58:22.858066 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858073 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.858079 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.858085 | orchestrator | 2026-03-10 00:58:22.858091 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-10 00:58:22.858097 | orchestrator | Tuesday 10 March 2026 00:50:52 +0000 (0:00:00.446) 0:04:24.238 ********* 2026-03-10 00:58:22.858104 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858110 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858116 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858122 | orchestrator | 2026-03-10 00:58:22.858129 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-10 00:58:22.858135 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:01.537) 0:04:25.775 ********* 2026-03-10 00:58:22.858145 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858151 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858157 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858164 | orchestrator | 2026-03-10 00:58:22.858170 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-03-10 00:58:22.858176 | orchestrator | Tuesday 10 March 2026 00:50:55 +0000 (0:00:00.827) 0:04:26.602 ********* 2026-03-10 00:58:22.858182 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858189 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858195 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858201 | orchestrator | 2026-03-10 00:58:22.858207 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-10 00:58:22.858213 | orchestrator | Tuesday 10 March 2026 00:50:55 +0000 (0:00:00.746) 0:04:27.349 ********* 2026-03-10 00:58:22.858220 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858226 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.858232 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.858238 | orchestrator | 2026-03-10 00:58:22.858244 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-10 00:58:22.858251 | orchestrator | Tuesday 10 March 2026 00:50:56 +0000 (0:00:00.650) 0:04:27.999 ********* 2026-03-10 00:58:22.858257 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858263 | orchestrator | 2026-03-10 00:58:22.858269 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-10 00:58:22.858275 | orchestrator | Tuesday 10 March 2026 00:50:57 +0000 (0:00:01.452) 0:04:29.452 ********* 2026-03-10 00:58:22.858282 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858288 | orchestrator | 2026-03-10 00:58:22.858294 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-10 00:58:22.858300 | orchestrator | Tuesday 10 March 2026 00:50:59 +0000 (0:00:01.471) 0:04:30.923 ********* 2026-03-10 00:58:22.858311 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.858317 | orchestrator 
| ok: [testbed-node-0] => (item=None) 2026-03-10 00:58:22.858324 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.858330 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-10 00:58:22.858337 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 00:58:22.858343 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 00:58:22.858349 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-10 00:58:22.858356 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 00:58:22.858362 | orchestrator | changed: [testbed-node-2 -> {{ item }}] 2026-03-10 00:58:22.858368 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-03-10 00:58:22.858375 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 00:58:22.858381 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-10 00:58:22.858387 | orchestrator | 2026-03-10 00:58:22.858393 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-10 00:58:22.858399 | orchestrator | Tuesday 10 March 2026 00:51:05 +0000 (0:00:05.914) 0:04:36.838 ********* 2026-03-10 00:58:22.858405 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858412 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858418 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858424 | orchestrator | 2026-03-10 00:58:22.858430 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-10 00:58:22.858437 | orchestrator | Tuesday 10 March 2026 00:51:07 +0000 (0:00:01.796) 0:04:38.634 ********* 2026-03-10 00:58:22.858443 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858449 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.858456 | orchestrator | ok: [testbed-node-2] 
2026-03-10 00:58:22.858462 | orchestrator | 2026-03-10 00:58:22.858468 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-10 00:58:22.858474 | orchestrator | Tuesday 10 March 2026 00:51:07 +0000 (0:00:00.417) 0:04:39.052 ********* 2026-03-10 00:58:22.858480 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858487 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.858493 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.858499 | orchestrator | 2026-03-10 00:58:22.858505 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-10 00:58:22.858511 | orchestrator | Tuesday 10 March 2026 00:51:08 +0000 (0:00:00.537) 0:04:39.589 ********* 2026-03-10 00:58:22.858537 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858559 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858566 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858572 | orchestrator | 2026-03-10 00:58:22.858579 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-10 00:58:22.858585 | orchestrator | Tuesday 10 March 2026 00:51:10 +0000 (0:00:01.996) 0:04:41.586 ********* 2026-03-10 00:58:22.858591 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858597 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858604 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858610 | orchestrator | 2026-03-10 00:58:22.858616 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-10 00:58:22.858622 | orchestrator | Tuesday 10 March 2026 00:51:11 +0000 (0:00:01.413) 0:04:43.000 ********* 2026-03-10 00:58:22.858628 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.858635 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.858641 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.858647 
| orchestrator | 2026-03-10 00:58:22.858653 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-10 00:58:22.858660 | orchestrator | Tuesday 10 March 2026 00:51:11 +0000 (0:00:00.295) 0:04:43.295 ********* 2026-03-10 00:58:22.858671 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.858677 | orchestrator | 2026-03-10 00:58:22.858683 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-10 00:58:22.858690 | orchestrator | Tuesday 10 March 2026 00:51:12 +0000 (0:00:00.696) 0:04:43.992 ********* 2026-03-10 00:58:22.858696 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.858706 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.858712 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.858718 | orchestrator | 2026-03-10 00:58:22.858724 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-10 00:58:22.858730 | orchestrator | Tuesday 10 March 2026 00:51:12 +0000 (0:00:00.343) 0:04:44.335 ********* 2026-03-10 00:58:22.858737 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.858743 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.858749 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.858755 | orchestrator | 2026-03-10 00:58:22.858761 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-10 00:58:22.858768 | orchestrator | Tuesday 10 March 2026 00:51:13 +0000 (0:00:00.386) 0:04:44.722 ********* 2026-03-10 00:58:22.858774 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.858780 | orchestrator | 2026-03-10 00:58:22.858786 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-03-10 00:58:22.858792 | orchestrator | Tuesday 10 March 2026 00:51:13 +0000 (0:00:00.829) 0:04:45.551 ********* 2026-03-10 00:58:22.858798 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858805 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858811 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858817 | orchestrator | 2026-03-10 00:58:22.858823 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-10 00:58:22.858829 | orchestrator | Tuesday 10 March 2026 00:51:16 +0000 (0:00:02.510) 0:04:48.062 ********* 2026-03-10 00:58:22.858835 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858842 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858848 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858854 | orchestrator | 2026-03-10 00:58:22.858860 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-10 00:58:22.858866 | orchestrator | Tuesday 10 March 2026 00:51:18 +0000 (0:00:02.014) 0:04:50.076 ********* 2026-03-10 00:58:22.858873 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858879 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858885 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858891 | orchestrator | 2026-03-10 00:58:22.858897 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-10 00:58:22.858904 | orchestrator | Tuesday 10 March 2026 00:51:21 +0000 (0:00:02.544) 0:04:52.620 ********* 2026-03-10 00:58:22.858910 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.858916 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.858922 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.858928 | orchestrator | 2026-03-10 00:58:22.858934 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-03-10 00:58:22.858940 | orchestrator | Tuesday 10 March 2026 00:51:23 +0000 (0:00:02.671) 0:04:55.292 ********* 2026-03-10 00:58:22.858947 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.858953 | orchestrator | 2026-03-10 00:58:22.858959 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-10 00:58:22.858965 | orchestrator | Tuesday 10 March 2026 00:51:24 +0000 (0:00:00.637) 0:04:55.929 ********* 2026-03-10 00:58:22.858971 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.858978 | orchestrator | 2026-03-10 00:58:22.858984 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-10 00:58:22.858995 | orchestrator | Tuesday 10 March 2026 00:51:25 +0000 (0:00:01.413) 0:04:57.343 ********* 2026-03-10 00:58:22.859001 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859007 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859013 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859019 | orchestrator | 2026-03-10 00:58:22.859026 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-10 00:58:22.859032 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:10.272) 0:05:07.615 ********* 2026-03-10 00:58:22.859038 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859044 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859050 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859056 | orchestrator | 2026-03-10 00:58:22.859063 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-10 00:58:22.859069 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.630) 0:05:08.246 ********* 2026-03-10 00:58:22.859090 | orchestrator | changed: [testbed-node-0] => 
(item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-10 00:58:22.859100 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-10 00:58:22.859108 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-10 00:58:22.859120 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-10 00:58:22.859129 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 
2026-03-10 00:58:22.859140 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1f3eb02cc9854b70b5a95e6adb4975d73a32b880'}])  2026-03-10 00:58:22.859153 | orchestrator | 2026-03-10 00:58:22.859164 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 00:58:22.859174 | orchestrator | Tuesday 10 March 2026 00:51:53 +0000 (0:00:17.219) 0:05:25.466 ********* 2026-03-10 00:58:22.859184 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859194 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859205 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859216 | orchestrator | 2026-03-10 00:58:22.859227 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-10 00:58:22.859239 | orchestrator | Tuesday 10 March 2026 00:51:54 +0000 (0:00:00.417) 0:05:25.883 ********* 2026-03-10 00:58:22.859245 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.859251 | orchestrator | 2026-03-10 00:58:22.859258 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-10 00:58:22.859264 | orchestrator | Tuesday 10 March 2026 00:51:55 +0000 (0:00:00.965) 0:05:26.848 ********* 2026-03-10 00:58:22.859270 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859276 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859282 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859289 | orchestrator | 2026-03-10 00:58:22.859295 | orchestrator | RUNNING 
HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-10 00:58:22.859301 | orchestrator | Tuesday 10 March 2026 00:51:55 +0000 (0:00:00.617) 0:05:27.466 ********* 2026-03-10 00:58:22.859307 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859313 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859320 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859326 | orchestrator | 2026-03-10 00:58:22.859332 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-10 00:58:22.859339 | orchestrator | Tuesday 10 March 2026 00:51:56 +0000 (0:00:00.364) 0:05:27.831 ********* 2026-03-10 00:58:22.859345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-10 00:58:22.859351 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-10 00:58:22.859357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-10 00:58:22.859364 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859370 | orchestrator | 2026-03-10 00:58:22.859376 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-10 00:58:22.859382 | orchestrator | Tuesday 10 March 2026 00:51:57 +0000 (0:00:01.107) 0:05:28.938 ********* 2026-03-10 00:58:22.859389 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859395 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859401 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859407 | orchestrator | 2026-03-10 00:58:22.859413 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-10 00:58:22.859420 | orchestrator | 2026-03-10 00:58:22.859447 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 00:58:22.859454 | orchestrator | Tuesday 10 March 2026 00:51:58 +0000 (0:00:01.045) 0:05:29.984 ********* 2026-03-10 
00:58:22.859461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.859467 | orchestrator | 2026-03-10 00:58:22.859473 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 00:58:22.859479 | orchestrator | Tuesday 10 March 2026 00:51:59 +0000 (0:00:00.601) 0:05:30.586 ********* 2026-03-10 00:58:22.859485 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.859492 | orchestrator | 2026-03-10 00:58:22.859498 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 00:58:22.859504 | orchestrator | Tuesday 10 March 2026 00:51:59 +0000 (0:00:00.886) 0:05:31.472 ********* 2026-03-10 00:58:22.859510 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859542 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859550 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859556 | orchestrator | 2026-03-10 00:58:22.859562 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 00:58:22.859568 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:00.784) 0:05:32.257 ********* 2026-03-10 00:58:22.859574 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859581 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859587 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859593 | orchestrator | 2026-03-10 00:58:22.859599 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 00:58:22.859612 | orchestrator | Tuesday 10 March 2026 00:52:01 +0000 (0:00:00.361) 0:05:32.619 ********* 2026-03-10 00:58:22.859618 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859624 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 00:58:22.859630 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859636 | orchestrator | 2026-03-10 00:58:22.859643 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 00:58:22.859649 | orchestrator | Tuesday 10 March 2026 00:52:01 +0000 (0:00:00.686) 0:05:33.305 ********* 2026-03-10 00:58:22.859655 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859661 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859667 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859674 | orchestrator | 2026-03-10 00:58:22.859680 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 00:58:22.859686 | orchestrator | Tuesday 10 March 2026 00:52:02 +0000 (0:00:00.349) 0:05:33.655 ********* 2026-03-10 00:58:22.859692 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859698 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859704 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859711 | orchestrator | 2026-03-10 00:58:22.859717 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 00:58:22.859723 | orchestrator | Tuesday 10 March 2026 00:52:02 +0000 (0:00:00.770) 0:05:34.425 ********* 2026-03-10 00:58:22.859729 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859735 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859741 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859748 | orchestrator | 2026-03-10 00:58:22.859754 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 00:58:22.859760 | orchestrator | Tuesday 10 March 2026 00:52:03 +0000 (0:00:00.324) 0:05:34.750 ********* 2026-03-10 00:58:22.859766 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859773 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 00:58:22.859779 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859785 | orchestrator | 2026-03-10 00:58:22.859791 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 00:58:22.859797 | orchestrator | Tuesday 10 March 2026 00:52:03 +0000 (0:00:00.592) 0:05:35.342 ********* 2026-03-10 00:58:22.859803 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859809 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859816 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859822 | orchestrator | 2026-03-10 00:58:22.859828 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 00:58:22.859834 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:00.746) 0:05:36.089 ********* 2026-03-10 00:58:22.859840 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859846 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859853 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.859859 | orchestrator | 2026-03-10 00:58:22.859865 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 00:58:22.859871 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.807) 0:05:36.896 ********* 2026-03-10 00:58:22.859877 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.859883 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.859890 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.859896 | orchestrator | 2026-03-10 00:58:22.859902 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 00:58:22.859966 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.325) 0:05:37.222 ********* 2026-03-10 00:58:22.859978 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.859984 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.859991 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 00:58:22.859997 | orchestrator | 2026-03-10 00:58:22.860003 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 00:58:22.860009 | orchestrator | Tuesday 10 March 2026 00:52:06 +0000 (0:00:00.369) 0:05:37.591 ********* 2026-03-10 00:58:22.860020 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860026 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860032 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860038 | orchestrator | 2026-03-10 00:58:22.860045 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 00:58:22.860051 | orchestrator | Tuesday 10 March 2026 00:52:06 +0000 (0:00:00.662) 0:05:38.254 ********* 2026-03-10 00:58:22.860057 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860063 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860087 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860094 | orchestrator | 2026-03-10 00:58:22.860100 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 00:58:22.860106 | orchestrator | Tuesday 10 March 2026 00:52:07 +0000 (0:00:00.371) 0:05:38.625 ********* 2026-03-10 00:58:22.860112 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860118 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860125 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860131 | orchestrator | 2026-03-10 00:58:22.860137 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 00:58:22.860143 | orchestrator | Tuesday 10 March 2026 00:52:07 +0000 (0:00:00.349) 0:05:38.975 ********* 2026-03-10 00:58:22.860149 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860155 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860161 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 00:58:22.860168 | orchestrator | 2026-03-10 00:58:22.860174 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 00:58:22.860180 | orchestrator | Tuesday 10 March 2026 00:52:07 +0000 (0:00:00.373) 0:05:39.349 ********* 2026-03-10 00:58:22.860186 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860192 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860198 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860204 | orchestrator | 2026-03-10 00:58:22.860211 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 00:58:22.860217 | orchestrator | Tuesday 10 March 2026 00:52:08 +0000 (0:00:00.662) 0:05:40.011 ********* 2026-03-10 00:58:22.860223 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.860229 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.860235 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.860242 | orchestrator | 2026-03-10 00:58:22.860251 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 00:58:22.860258 | orchestrator | Tuesday 10 March 2026 00:52:08 +0000 (0:00:00.378) 0:05:40.389 ********* 2026-03-10 00:58:22.860264 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.860270 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.860276 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.860282 | orchestrator | 2026-03-10 00:58:22.860288 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 00:58:22.860295 | orchestrator | Tuesday 10 March 2026 00:52:09 +0000 (0:00:00.432) 0:05:40.822 ********* 2026-03-10 00:58:22.860301 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.860307 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.860313 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.860319 | 
orchestrator | 2026-03-10 00:58:22.860325 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-10 00:58:22.860331 | orchestrator | Tuesday 10 March 2026 00:52:10 +0000 (0:00:00.890) 0:05:41.712 ********* 2026-03-10 00:58:22.860338 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-10 00:58:22.860344 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 00:58:22.860350 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 00:58:22.860356 | orchestrator | 2026-03-10 00:58:22.860363 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-10 00:58:22.860373 | orchestrator | Tuesday 10 March 2026 00:52:10 +0000 (0:00:00.735) 0:05:42.448 ********* 2026-03-10 00:58:22.860380 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.860386 | orchestrator | 2026-03-10 00:58:22.860392 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-10 00:58:22.860398 | orchestrator | Tuesday 10 March 2026 00:52:11 +0000 (0:00:00.621) 0:05:43.070 ********* 2026-03-10 00:58:22.860404 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.860410 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.860417 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.860423 | orchestrator | 2026-03-10 00:58:22.860429 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-10 00:58:22.860435 | orchestrator | Tuesday 10 March 2026 00:52:12 +0000 (0:00:00.795) 0:05:43.865 ********* 2026-03-10 00:58:22.860441 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860447 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860453 | orchestrator | 
skipping: [testbed-node-2] 2026-03-10 00:58:22.860460 | orchestrator | 2026-03-10 00:58:22.860466 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-10 00:58:22.860472 | orchestrator | Tuesday 10 March 2026 00:52:12 +0000 (0:00:00.618) 0:05:44.484 ********* 2026-03-10 00:58:22.860478 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 00:58:22.860485 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 00:58:22.860491 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 00:58:22.860497 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-10 00:58:22.860503 | orchestrator | 2026-03-10 00:58:22.860509 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-10 00:58:22.860546 | orchestrator | Tuesday 10 March 2026 00:52:23 +0000 (0:00:11.054) 0:05:55.538 ********* 2026-03-10 00:58:22.860554 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.860561 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.860567 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.860573 | orchestrator | 2026-03-10 00:58:22.860579 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-10 00:58:22.860585 | orchestrator | Tuesday 10 March 2026 00:52:24 +0000 (0:00:00.433) 0:05:55.971 ********* 2026-03-10 00:58:22.860591 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-10 00:58:22.860598 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 00:58:22.860604 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 00:58:22.860610 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-10 00:58:22.860616 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.860623 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-03-10 00:58:22.860629 | orchestrator | 2026-03-10 00:58:22.860647 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-10 00:58:22.860654 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:03.284) 0:05:59.256 ********* 2026-03-10 00:58:22.860660 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-10 00:58:22.860667 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 00:58:22.860673 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 00:58:22.860679 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 00:58:22.860685 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-10 00:58:22.860691 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-10 00:58:22.860698 | orchestrator | 2026-03-10 00:58:22.860704 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-10 00:58:22.860710 | orchestrator | Tuesday 10 March 2026 00:52:29 +0000 (0:00:01.353) 0:06:00.609 ********* 2026-03-10 00:58:22.860716 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.860722 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.860729 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.860739 | orchestrator | 2026-03-10 00:58:22.860746 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-10 00:58:22.860752 | orchestrator | Tuesday 10 March 2026 00:52:30 +0000 (0:00:01.087) 0:06:01.696 ********* 2026-03-10 00:58:22.860758 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860764 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860770 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860777 | orchestrator | 2026-03-10 00:58:22.860783 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-10 00:58:22.860789 | 
orchestrator | Tuesday 10 March 2026 00:52:30 +0000 (0:00:00.385) 0:06:02.081 ********* 2026-03-10 00:58:22.860799 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860805 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860811 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860817 | orchestrator | 2026-03-10 00:58:22.860824 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-10 00:58:22.860830 | orchestrator | Tuesday 10 March 2026 00:52:30 +0000 (0:00:00.452) 0:06:02.534 ********* 2026-03-10 00:58:22.860836 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.860845 | orchestrator | 2026-03-10 00:58:22.860856 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-10 00:58:22.860862 | orchestrator | Tuesday 10 March 2026 00:52:31 +0000 (0:00:00.814) 0:06:03.349 ********* 2026-03-10 00:58:22.860868 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860875 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860881 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860887 | orchestrator | 2026-03-10 00:58:22.860893 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-10 00:58:22.860899 | orchestrator | Tuesday 10 March 2026 00:52:32 +0000 (0:00:00.426) 0:06:03.775 ********* 2026-03-10 00:58:22.860905 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.860911 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.860917 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:58:22.860924 | orchestrator | 2026-03-10 00:58:22.860930 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-10 00:58:22.860936 | orchestrator | Tuesday 10 March 2026 00:52:32 +0000 (0:00:00.342) 
0:06:04.118 ********* 2026-03-10 00:58:22.860942 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.860948 | orchestrator | 2026-03-10 00:58:22.860955 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-10 00:58:22.860961 | orchestrator | Tuesday 10 March 2026 00:52:33 +0000 (0:00:00.974) 0:06:05.093 ********* 2026-03-10 00:58:22.860967 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.860973 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.860979 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.860985 | orchestrator | 2026-03-10 00:58:22.860992 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-10 00:58:22.860998 | orchestrator | Tuesday 10 March 2026 00:52:34 +0000 (0:00:01.434) 0:06:06.527 ********* 2026-03-10 00:58:22.861004 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.861010 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.861016 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.861022 | orchestrator | 2026-03-10 00:58:22.861028 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-10 00:58:22.861034 | orchestrator | Tuesday 10 March 2026 00:52:36 +0000 (0:00:01.318) 0:06:07.846 ********* 2026-03-10 00:58:22.861041 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.861047 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.861053 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.861059 | orchestrator | 2026-03-10 00:58:22.861065 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-10 00:58:22.861076 | orchestrator | Tuesday 10 March 2026 00:52:38 +0000 (0:00:01.801) 0:06:09.647 ********* 2026-03-10 00:58:22.861082 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 00:58:22.861088 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.861094 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.861100 | orchestrator | 2026-03-10 00:58:22.861106 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-10 00:58:22.861113 | orchestrator | Tuesday 10 March 2026 00:52:40 +0000 (0:00:02.611) 0:06:12.258 ********* 2026-03-10 00:58:22.861119 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.861125 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:58:22.861131 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-10 00:58:22.861137 | orchestrator | 2026-03-10 00:58:22.861143 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-10 00:58:22.861149 | orchestrator | Tuesday 10 March 2026 00:52:41 +0000 (0:00:00.455) 0:06:12.714 ********* 2026-03-10 00:58:22.861155 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-10 00:58:22.861175 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-10 00:58:22.861182 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-10 00:58:22.861189 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-10 00:58:22.861195 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-10 00:58:22.861201 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-10 00:58:22.861207 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-10 00:58:22.861213 | orchestrator | 2026-03-10 00:58:22.861219 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-10 00:58:22.861226 | orchestrator | Tuesday 10 March 2026 00:53:17 +0000 (0:00:36.737) 0:06:49.452 ********* 2026-03-10 00:58:22.861232 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-10 00:58:22.861238 | orchestrator | 2026-03-10 00:58:22.861244 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-10 00:58:22.861250 | orchestrator | Tuesday 10 March 2026 00:53:19 +0000 (0:00:01.524) 0:06:50.977 ********* 2026-03-10 00:58:22.861257 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.861263 | orchestrator | 2026-03-10 00:58:22.861269 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-10 00:58:22.861279 | orchestrator | Tuesday 10 March 2026 00:53:19 +0000 (0:00:00.381) 0:06:51.358 ********* 2026-03-10 00:58:22.861285 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.861291 | orchestrator | 2026-03-10 00:58:22.861297 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-10 00:58:22.861304 | orchestrator | Tuesday 10 March 2026 00:53:19 +0000 (0:00:00.142) 0:06:51.501 ********* 2026-03-10 00:58:22.861310 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-10 00:58:22.861316 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-10 00:58:22.861322 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-10 00:58:22.861328 | orchestrator | 2026-03-10 00:58:22.861335 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-10 00:58:22.861341 | orchestrator | Tuesday 10 March 2026 00:53:26 +0000 (0:00:06.825) 0:06:58.326 ********* 2026-03-10 00:58:22.861347 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-10 00:58:22.861353 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-10 00:58:22.861359 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-10 00:58:22.861370 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-10 00:58:22.861377 | orchestrator | 2026-03-10 00:58:22.861383 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 00:58:22.861389 | orchestrator | Tuesday 10 March 2026 00:53:32 +0000 (0:00:05.676) 0:07:04.003 ********* 2026-03-10 00:58:22.861395 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.861401 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.861407 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.861414 | orchestrator | 2026-03-10 00:58:22.861420 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-10 00:58:22.861426 | orchestrator | Tuesday 10 March 2026 00:53:33 +0000 (0:00:00.804) 0:07:04.807 ********* 2026-03-10 00:58:22.861432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:58:22.861438 | orchestrator | 2026-03-10 00:58:22.861445 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-10 00:58:22.861451 | orchestrator | Tuesday 10 March 2026 00:53:34 +0000 (0:00:00.924) 0:07:05.732 ********* 2026-03-10 00:58:22.861457 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.861463 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.861469 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 00:58:22.861476 | orchestrator | 2026-03-10 00:58:22.861482 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-10 00:58:22.861488 | orchestrator | Tuesday 10 March 2026 00:53:34 +0000 (0:00:00.412) 0:07:06.145 ********* 2026-03-10 00:58:22.861494 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:58:22.861501 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:58:22.861507 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:58:22.861513 | orchestrator | 2026-03-10 00:58:22.861531 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-10 00:58:22.861537 | orchestrator | Tuesday 10 March 2026 00:53:35 +0000 (0:00:01.271) 0:07:07.417 ********* 2026-03-10 00:58:22.861544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-10 00:58:22.861550 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-10 00:58:22.861556 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-10 00:58:22.861562 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:58:22.861568 | orchestrator | 2026-03-10 00:58:22.861574 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-10 00:58:22.861581 | orchestrator | Tuesday 10 March 2026 00:53:36 +0000 (0:00:00.643) 0:07:08.060 ********* 2026-03-10 00:58:22.861587 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:58:22.861593 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:58:22.861599 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:58:22.861605 | orchestrator | 2026-03-10 00:58:22.861612 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-10 00:58:22.861618 | orchestrator | 2026-03-10 00:58:22.861624 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 
00:58:22.861630 | orchestrator | Tuesday 10 March 2026 00:53:37 +0000 (0:00:00.978) 0:07:09.038 ********* 2026-03-10 00:58:22.861650 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.861658 | orchestrator | 2026-03-10 00:58:22.861664 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 00:58:22.861670 | orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:00.597) 0:07:09.635 ********* 2026-03-10 00:58:22.861676 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.861683 | orchestrator | 2026-03-10 00:58:22.861689 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 00:58:22.861695 | orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:00.775) 0:07:10.410 ********* 2026-03-10 00:58:22.861708 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.861714 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.861720 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.861726 | orchestrator | 2026-03-10 00:58:22.861732 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 00:58:22.861738 | orchestrator | Tuesday 10 March 2026 00:53:39 +0000 (0:00:00.347) 0:07:10.758 ********* 2026-03-10 00:58:22.861744 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.861750 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.861757 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.861763 | orchestrator | 2026-03-10 00:58:22.861769 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 00:58:22.861775 | orchestrator | Tuesday 10 March 2026 00:53:39 +0000 (0:00:00.739) 0:07:11.497 ********* 
2026-03-10 00:58:22.861782 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.861792 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.861798 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.861804 | orchestrator | 2026-03-10 00:58:22.861811 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 00:58:22.861817 | orchestrator | Tuesday 10 March 2026 00:53:40 +0000 (0:00:00.836) 0:07:12.334 ********* 2026-03-10 00:58:22.861823 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.861829 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.861835 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.861841 | orchestrator | 2026-03-10 00:58:22.861847 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 00:58:22.861853 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:01.123) 0:07:13.458 ********* 2026-03-10 00:58:22.861860 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.861866 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.861872 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.861878 | orchestrator | 2026-03-10 00:58:22.861884 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 00:58:22.861890 | orchestrator | Tuesday 10 March 2026 00:53:42 +0000 (0:00:00.375) 0:07:13.833 ********* 2026-03-10 00:58:22.861896 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.861902 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.861909 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.861915 | orchestrator | 2026-03-10 00:58:22.861921 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 00:58:22.861927 | orchestrator | Tuesday 10 March 2026 00:53:42 +0000 (0:00:00.354) 0:07:14.187 ********* 2026-03-10 00:58:22.861933 | 
orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.861939 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.861945 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.861951 | orchestrator | 2026-03-10 00:58:22.861957 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 00:58:22.861964 | orchestrator | Tuesday 10 March 2026 00:53:43 +0000 (0:00:00.433) 0:07:14.621 ********* 2026-03-10 00:58:22.861970 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.861976 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.861982 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.861988 | orchestrator | 2026-03-10 00:58:22.861994 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 00:58:22.862000 | orchestrator | Tuesday 10 March 2026 00:53:44 +0000 (0:00:01.056) 0:07:15.677 ********* 2026-03-10 00:58:22.862006 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862047 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862055 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862061 | orchestrator | 2026-03-10 00:58:22.862067 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 00:58:22.862074 | orchestrator | Tuesday 10 March 2026 00:53:44 +0000 (0:00:00.841) 0:07:16.518 ********* 2026-03-10 00:58:22.862080 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.862086 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862097 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862103 | orchestrator | 2026-03-10 00:58:22.862110 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 00:58:22.862116 | orchestrator | Tuesday 10 March 2026 00:53:45 +0000 (0:00:00.350) 0:07:16.869 ********* 2026-03-10 00:58:22.862123 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 00:58:22.862129 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862135 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862141 | orchestrator | 2026-03-10 00:58:22.862147 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 00:58:22.862153 | orchestrator | Tuesday 10 March 2026 00:53:45 +0000 (0:00:00.371) 0:07:17.241 ********* 2026-03-10 00:58:22.862159 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862166 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862172 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862178 | orchestrator | 2026-03-10 00:58:22.862184 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 00:58:22.862190 | orchestrator | Tuesday 10 March 2026 00:53:46 +0000 (0:00:00.658) 0:07:17.900 ********* 2026-03-10 00:58:22.862197 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862203 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862209 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862215 | orchestrator | 2026-03-10 00:58:22.862221 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 00:58:22.862227 | orchestrator | Tuesday 10 March 2026 00:53:46 +0000 (0:00:00.368) 0:07:18.269 ********* 2026-03-10 00:58:22.862233 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862239 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862250 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862256 | orchestrator | 2026-03-10 00:58:22.862262 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 00:58:22.862269 | orchestrator | Tuesday 10 March 2026 00:53:47 +0000 (0:00:00.350) 0:07:18.620 ********* 2026-03-10 00:58:22.862275 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.862281 | 
orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862287 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862293 | orchestrator | 2026-03-10 00:58:22.862300 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 00:58:22.862306 | orchestrator | Tuesday 10 March 2026 00:53:47 +0000 (0:00:00.361) 0:07:18.981 ********* 2026-03-10 00:58:22.862312 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.862318 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862325 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862331 | orchestrator | 2026-03-10 00:58:22.862337 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 00:58:22.862343 | orchestrator | Tuesday 10 March 2026 00:53:48 +0000 (0:00:00.639) 0:07:19.621 ********* 2026-03-10 00:58:22.862349 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.862356 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862362 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862368 | orchestrator | 2026-03-10 00:58:22.862374 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 00:58:22.862381 | orchestrator | Tuesday 10 March 2026 00:53:48 +0000 (0:00:00.362) 0:07:19.983 ********* 2026-03-10 00:58:22.862387 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862393 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862399 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862405 | orchestrator | 2026-03-10 00:58:22.862415 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 00:58:22.862422 | orchestrator | Tuesday 10 March 2026 00:53:48 +0000 (0:00:00.383) 0:07:20.366 ********* 2026-03-10 00:58:22.862428 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862434 | orchestrator | ok: 
[testbed-node-4] 2026-03-10 00:58:22.862440 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862446 | orchestrator | 2026-03-10 00:58:22.862457 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-10 00:58:22.862464 | orchestrator | Tuesday 10 March 2026 00:53:49 +0000 (0:00:00.832) 0:07:21.199 ********* 2026-03-10 00:58:22.862470 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862476 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862482 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862488 | orchestrator | 2026-03-10 00:58:22.862495 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-10 00:58:22.862501 | orchestrator | Tuesday 10 March 2026 00:53:50 +0000 (0:00:00.359) 0:07:21.558 ********* 2026-03-10 00:58:22.862507 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 00:58:22.862513 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 00:58:22.862536 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 00:58:22.862543 | orchestrator | 2026-03-10 00:58:22.862549 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-10 00:58:22.862555 | orchestrator | Tuesday 10 March 2026 00:53:50 +0000 (0:00:00.656) 0:07:22.215 ********* 2026-03-10 00:58:22.862562 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.862568 | orchestrator | 2026-03-10 00:58:22.862574 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-10 00:58:22.862580 | orchestrator | Tuesday 10 March 2026 00:53:51 +0000 (0:00:00.572) 0:07:22.788 ********* 2026-03-10 00:58:22.862586 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 00:58:22.862593 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862599 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862605 | orchestrator | 2026-03-10 00:58:22.862611 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-10 00:58:22.862617 | orchestrator | Tuesday 10 March 2026 00:53:51 +0000 (0:00:00.609) 0:07:23.397 ********* 2026-03-10 00:58:22.862623 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.862630 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862636 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862642 | orchestrator | 2026-03-10 00:58:22.862648 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-10 00:58:22.862655 | orchestrator | Tuesday 10 March 2026 00:53:52 +0000 (0:00:00.318) 0:07:23.716 ********* 2026-03-10 00:58:22.862661 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862667 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862673 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862679 | orchestrator | 2026-03-10 00:58:22.862686 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-10 00:58:22.862692 | orchestrator | Tuesday 10 March 2026 00:53:52 +0000 (0:00:00.587) 0:07:24.303 ********* 2026-03-10 00:58:22.862698 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.862704 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.862710 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.862716 | orchestrator | 2026-03-10 00:58:22.862723 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-10 00:58:22.862729 | orchestrator | Tuesday 10 March 2026 00:53:53 +0000 (0:00:00.369) 0:07:24.673 ********* 2026-03-10 00:58:22.862735 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-10 00:58:22.862741 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-10 00:58:22.862748 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-10 00:58:22.862754 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-10 00:58:22.862760 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-10 00:58:22.862784 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-10 00:58:22.862790 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-10 00:58:22.862799 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-10 00:58:22.862809 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-10 00:58:22.862820 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-10 00:58:22.862831 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-10 00:58:22.862841 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-10 00:58:22.862850 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-10 00:58:22.862860 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-10 00:58:22.862869 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-10 00:58:22.862881 | orchestrator | 2026-03-10 00:58:22.862891 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-10 00:58:22.862902 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:02.185) 0:07:26.859 ********* 2026-03-10 00:58:22.862913 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.862925 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.862931 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.862937 | orchestrator | 2026-03-10 00:58:22.862944 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-10 00:58:22.862950 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:00.304) 0:07:27.163 ********* 2026-03-10 00:58:22.862956 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.862962 | orchestrator | 2026-03-10 00:58:22.862968 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-10 00:58:22.862974 | orchestrator | Tuesday 10 March 2026 00:53:56 +0000 (0:00:00.510) 0:07:27.674 ********* 2026-03-10 00:58:22.862980 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-10 00:58:22.862986 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-10 00:58:22.862992 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-10 00:58:22.862999 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-10 00:58:22.863005 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-10 00:58:22.863011 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-10 00:58:22.863017 | orchestrator | 2026-03-10 00:58:22.863023 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-10 00:58:22.863029 | orchestrator | Tuesday 10 March 2026 00:53:57 +0000 (0:00:01.146) 0:07:28.821 ********* 2026-03-10 00:58:22.863035 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.863042 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 00:58:22.863048 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 00:58:22.863054 | orchestrator | 2026-03-10 00:58:22.863060 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-10 00:58:22.863066 | orchestrator | Tuesday 10 March 2026 00:53:59 +0000 (0:00:02.042) 0:07:30.863 ********* 2026-03-10 00:58:22.863072 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 00:58:22.863078 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 00:58:22.863084 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.863091 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 00:58:22.863097 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-10 00:58:22.863103 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.863125 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 00:58:22.863131 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-10 00:58:22.863137 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.863143 | orchestrator | 2026-03-10 00:58:22.863149 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-10 00:58:22.863155 | orchestrator | Tuesday 10 March 2026 00:54:00 +0000 (0:00:01.120) 0:07:31.984 ********* 2026-03-10 00:58:22.863161 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-10 00:58:22.863168 | orchestrator | 2026-03-10 00:58:22.863174 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-10 00:58:22.863180 | orchestrator | Tuesday 10 March 2026 00:54:02 +0000 (0:00:02.371) 0:07:34.356 ********* 2026-03-10 00:58:22.863186 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.863192 | orchestrator | 2026-03-10 00:58:22.863198 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-10 00:58:22.863204 | orchestrator | Tuesday 10 March 2026 00:54:03 +0000 (0:00:00.532) 0:07:34.888 ********* 2026-03-10 00:58:22.863211 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d', 'data_vg': 'ceph-ba4e8e90-9c8a-5143-9418-e7ec5f1bd32d'}) 2026-03-10 00:58:22.863218 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-120d91ae-c06d-5ca9-b450-85f2d491e96a', 'data_vg': 'ceph-120d91ae-c06d-5ca9-b450-85f2d491e96a'}) 2026-03-10 00:58:22.863229 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c0742eba-6300-5cfa-b498-a3704e14c384', 'data_vg': 'ceph-c0742eba-6300-5cfa-b498-a3704e14c384'}) 2026-03-10 00:58:22.863235 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e8bae358-0d63-5788-ab6b-8bf409d6bda1', 'data_vg': 'ceph-e8bae358-0d63-5788-ab6b-8bf409d6bda1'}) 2026-03-10 00:58:22.863242 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-07a8a029-b5c8-5530-8cc4-5b47064bbf55', 'data_vg': 'ceph-07a8a029-b5c8-5530-8cc4-5b47064bbf55'}) 2026-03-10 00:58:22.863248 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2', 'data_vg': 'ceph-45abfd4e-fefd-5ba8-aea8-e55d74ffeda2'}) 2026-03-10 00:58:22.863254 | orchestrator | 2026-03-10 00:58:22.863260 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-10 00:58:22.863267 | orchestrator | Tuesday 10 March 2026 00:54:47 +0000 (0:00:44.382) 0:08:19.271 ********* 2026-03-10 00:58:22.863273 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863279 | orchestrator | skipping: [testbed-node-4] 2026-03-10 
00:58:22.863285 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.863291 | orchestrator | 2026-03-10 00:58:22.863298 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-10 00:58:22.863304 | orchestrator | Tuesday 10 March 2026 00:54:48 +0000 (0:00:00.345) 0:08:19.617 ********* 2026-03-10 00:58:22.863310 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.863316 | orchestrator | 2026-03-10 00:58:22.863325 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-10 00:58:22.863332 | orchestrator | Tuesday 10 March 2026 00:54:48 +0000 (0:00:00.501) 0:08:20.118 ********* 2026-03-10 00:58:22.863338 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.863344 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.863351 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.863357 | orchestrator | 2026-03-10 00:58:22.863363 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-10 00:58:22.863369 | orchestrator | Tuesday 10 March 2026 00:54:49 +0000 (0:00:00.994) 0:08:21.113 ********* 2026-03-10 00:58:22.863375 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.863381 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.863388 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.863400 | orchestrator | 2026-03-10 00:58:22.863406 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-10 00:58:22.863413 | orchestrator | Tuesday 10 March 2026 00:54:52 +0000 (0:00:02.830) 0:08:23.943 ********* 2026-03-10 00:58:22.863420 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.863426 | orchestrator | 2026-03-10 00:58:22.863433 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-10 00:58:22.863440 | orchestrator | Tuesday 10 March 2026 00:54:53 +0000 (0:00:00.620) 0:08:24.563 ********* 2026-03-10 00:58:22.863446 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.863453 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.863460 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.863467 | orchestrator | 2026-03-10 00:58:22.863473 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-10 00:58:22.863480 | orchestrator | Tuesday 10 March 2026 00:54:54 +0000 (0:00:01.626) 0:08:26.189 ********* 2026-03-10 00:58:22.863487 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.863493 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.863500 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.863507 | orchestrator | 2026-03-10 00:58:22.863513 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-10 00:58:22.863534 | orchestrator | Tuesday 10 March 2026 00:54:55 +0000 (0:00:01.175) 0:08:27.365 ********* 2026-03-10 00:58:22.863541 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.863548 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.863554 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.863561 | orchestrator | 2026-03-10 00:58:22.863568 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-10 00:58:22.863575 | orchestrator | Tuesday 10 March 2026 00:54:57 +0000 (0:00:01.912) 0:08:29.278 ********* 2026-03-10 00:58:22.863581 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863588 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.863595 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.863601 | orchestrator | 2026-03-10 00:58:22.863608 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-10 00:58:22.863615 | orchestrator | Tuesday 10 March 2026 00:54:58 +0000 (0:00:00.363) 0:08:29.642 ********* 2026-03-10 00:58:22.863621 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863628 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.863635 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.863641 | orchestrator | 2026-03-10 00:58:22.863648 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-10 00:58:22.863655 | orchestrator | Tuesday 10 March 2026 00:54:58 +0000 (0:00:00.679) 0:08:30.321 ********* 2026-03-10 00:58:22.863661 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-10 00:58:22.863668 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-10 00:58:22.863675 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-10 00:58:22.863681 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-10 00:58:22.863688 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-10 00:58:22.863695 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-10 00:58:22.863701 | orchestrator | 2026-03-10 00:58:22.863708 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-10 00:58:22.863715 | orchestrator | Tuesday 10 March 2026 00:54:59 +0000 (0:00:01.071) 0:08:31.393 ********* 2026-03-10 00:58:22.863721 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-10 00:58:22.863728 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-10 00:58:22.863735 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-10 00:58:22.863741 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-10 00:58:22.863748 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-10 00:58:22.863759 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-10 00:58:22.863766 | orchestrator | 2026-03-10 00:58:22.863773 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-10 00:58:22.863784 | orchestrator | Tuesday 10 March 2026 00:55:02 +0000 (0:00:02.237) 0:08:33.630 ********* 2026-03-10 00:58:22.863791 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-10 00:58:22.863797 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-10 00:58:22.863804 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-10 00:58:22.863811 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-10 00:58:22.863817 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-10 00:58:22.863824 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-10 00:58:22.863831 | orchestrator | 2026-03-10 00:58:22.863837 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-10 00:58:22.863844 | orchestrator | Tuesday 10 March 2026 00:55:06 +0000 (0:00:03.945) 0:08:37.575 ********* 2026-03-10 00:58:22.863851 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863858 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.863864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-10 00:58:22.863871 | orchestrator | 2026-03-10 00:58:22.863878 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-10 00:58:22.863884 | orchestrator | Tuesday 10 March 2026 00:55:09 +0000 (0:00:03.123) 0:08:40.699 ********* 2026-03-10 00:58:22.863891 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863897 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.863908 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-10 00:58:22.863915 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-10 00:58:22.863922 | orchestrator | 2026-03-10 00:58:22.863929 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-10 00:58:22.863935 | orchestrator | Tuesday 10 March 2026 00:55:21 +0000 (0:00:12.572) 0:08:53.271 ********* 2026-03-10 00:58:22.863942 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863949 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.863955 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.863962 | orchestrator | 2026-03-10 00:58:22.863969 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 00:58:22.863975 | orchestrator | Tuesday 10 March 2026 00:55:22 +0000 (0:00:01.177) 0:08:54.448 ********* 2026-03-10 00:58:22.863982 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.863989 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.863995 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.864002 | orchestrator | 2026-03-10 00:58:22.864009 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-10 00:58:22.864015 | orchestrator | Tuesday 10 March 2026 00:55:23 +0000 (0:00:00.375) 0:08:54.824 ********* 2026-03-10 00:58:22.864022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.864029 | orchestrator | 2026-03-10 00:58:22.864036 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-10 00:58:22.864042 | orchestrator | Tuesday 10 March 2026 00:55:23 +0000 (0:00:00.517) 0:08:55.342 ********* 2026-03-10 00:58:22.864049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 00:58:22.864056 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-10 00:58:22.864063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 00:58:22.864069 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.864076 | orchestrator | 2026-03-10 00:58:22.864082 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-10 00:58:22.864089 | orchestrator | Tuesday 10 March 2026 00:55:24 +0000 (0:00:01.005) 0:08:56.348 ********* 2026-03-10 00:58:22.864096 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.864102 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.864109 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.864120 | orchestrator | 2026-03-10 00:58:22.864126 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-10 00:58:22.864133 | orchestrator | Tuesday 10 March 2026 00:55:25 +0000 (0:00:00.347) 0:08:56.695 ********* 2026-03-10 00:58:22.864140 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.864146 | orchestrator | 2026-03-10 00:58:22.864153 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-10 00:58:22.864160 | orchestrator | Tuesday 10 March 2026 00:55:25 +0000 (0:00:00.266) 0:08:56.962 ********* 2026-03-10 00:58:22.864166 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.864173 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.864180 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.864186 | orchestrator | 2026-03-10 00:58:22.864193 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-10 00:58:22.864200 | orchestrator | Tuesday 10 March 2026 00:55:25 +0000 (0:00:00.381) 0:08:57.343 ********* 2026-03-10 00:58:22.864206 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.864213 | orchestrator | 2026-03-10 00:58:22.864220 | orchestrator | RUNNING 
2026-03-10 00:58:22.864226 | orchestrator | HANDLER [ceph-handler : Get balancer module status] ********************
Tuesday 10 March 2026 00:55:25 +0000 (0:00:00.206) 0:08:57.550 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Tuesday 10 March 2026 00:55:26 +0000 (0:00:00.272) 0:08:57.822 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Tuesday 10 March 2026 00:55:26 +0000 (0:00:00.129) 0:08:57.951 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Tuesday 10 March 2026 00:55:26 +0000 (0:00:00.238) 0:08:58.189 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Tuesday 10 March 2026 00:55:27 +0000 (0:00:00.920) 0:08:59.110 *********
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Tuesday 10 March 2026 00:55:28 +0000 (0:00:00.488) 0:08:59.599 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Tuesday 10 March 2026 00:55:28 +0000 (0:00:00.400) 0:08:59.999 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Tuesday 10 March 2026 00:55:28 +0000 (0:00:00.253) 0:09:00.252 *********
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 10 March 2026 00:55:29 +0000 (0:00:00.963) 0:09:01.216 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 10 March 2026 00:55:30 +0000 (0:00:01.315) 0:09:02.531 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 10 March 2026 00:55:32 +0000 (0:00:01.302) 0:09:03.834 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 10 March 2026 00:55:33 +0000 (0:00:01.312) 0:09:05.147 *********
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 10 March 2026 00:55:34 +0000 (0:00:00.744) 0:09:05.892 *********
ok: [testbed-node-3]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 10 March 2026 00:55:35 +0000 (0:00:01.070) 0:09:06.962 *********
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
ok: [testbed-node-4]
skipping: [testbed-node-2]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 10 March 2026 00:55:36 +0000 (0:00:00.725) 0:09:07.687 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 10 March 2026 00:55:37 +0000 (0:00:01.331) 0:09:09.019 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 10 March 2026 00:55:38 +0000 (0:00:00.626) 0:09:09.645 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 10 March 2026 00:55:39 +0000 (0:00:00.908) 0:09:10.554 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 10 March 2026 00:55:40 +0000 (0:00:01.084) 0:09:11.638 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 10 March 2026 00:55:41 +0000 (0:00:01.433) 0:09:13.072 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 10 March 2026 00:55:42 +0000 (0:00:00.616) 0:09:13.689 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 10 March 2026 00:55:43 +0000 (0:00:00.928) 0:09:14.618 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 10 March 2026 00:55:43 +0000 (0:00:00.706) 0:09:15.325 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 10 March 2026 00:55:44 +0000 (0:00:00.937) 0:09:16.263 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 10 March 2026 00:55:45 +0000 (0:00:00.660) 0:09:16.923 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 10 March 2026 00:55:46 +0000 (0:00:00.884) 0:09:17.808 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 10 March 2026 00:55:46 +0000 (0:00:00.620) 0:09:18.428 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 10 March 2026 00:55:47 +0000 (0:00:01.001) 0:09:19.429 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 10 March 2026 00:55:48 +0000 (0:00:00.730) 0:09:20.160 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Tuesday 10 March 2026 00:55:50 +0000 (0:00:01.397) 0:09:21.557 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Tuesday 10 March 2026 00:55:54 +0000 (0:00:04.245) 0:09:25.803 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Tuesday 10 March 2026 00:55:56 +0000 (0:00:02.237) 0:09:28.040 *********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Tuesday 10 March 2026 00:55:58 +0000 (0:00:02.114) 0:09:30.155 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Tuesday 10 March 2026 00:55:59 +0000 (0:00:01.081) 0:09:31.237 *********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Tuesday 10 March 2026 00:56:01 +0000 (0:00:01.351) 0:09:32.588 *********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Tuesday 10 March 2026 00:56:02 +0000 (0:00:01.811) 0:09:34.399 *********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Tuesday 10 March 2026 00:56:06 +0000 (0:00:03.429) 0:09:37.829 *********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Tuesday 10 March 2026 00:56:07 +0000 (0:00:01.395) 0:09:39.225 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Tuesday 10 March 2026 00:56:08 +0000 (0:00:00.941) 0:09:40.166 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Tuesday 10 March 2026 00:56:10 +0000 (0:00:02.153) 0:09:42.319 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 10 March 2026 00:56:12 +0000 (0:00:01.358) 0:09:43.677 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 10 March 2026 00:56:12 +0000 (0:00:00.566) 0:09:44.244 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 10 March 2026 00:56:13 +0000 (0:00:00.855) 0:09:45.100 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 10 March 2026 00:56:13 +0000 (0:00:00.318) 0:09:45.419 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 10 March 2026 00:56:14 +0000 (0:00:00.710) 0:09:46.129 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 10 March 2026 00:56:15 +0000 (0:00:01.072) 0:09:47.202 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 10 March 2026 00:56:16 +0000 (0:00:00.833) 0:09:48.036 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 10 March 2026 00:56:16 +0000 (0:00:00.360) 0:09:48.396 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 10 March 2026 00:56:17 +0000 (0:00:00.327) 0:09:48.724 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 10 March 2026 00:56:17 +0000 (0:00:00.624) 0:09:49.349 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 10 March 2026 00:56:18 +0000 (0:00:00.762) 0:09:50.111 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 10 March 2026 00:56:19 +0000 (0:00:00.745) 0:09:50.857 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 10 March 2026 00:56:19 +0000 (0:00:00.322) 0:09:51.179 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 10 March 2026 00:56:20 +0000 (0:00:00.598) 0:09:51.778 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 10 March 2026 00:56:20 +0000 (0:00:00.333) 0:09:52.111 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 10 March 2026 00:56:20 +0000 (0:00:00.358) 0:09:52.469 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 10 March 2026 00:56:21 +0000 (0:00:00.356) 0:09:52.826 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 10 March 2026 00:56:21 +0000 (0:00:00.701) 0:09:53.528 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 10 March 2026 00:56:22 +0000 (0:00:00.358) 0:09:53.886 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 10 March 2026 00:56:22 +0000 (0:00:00.322) 0:09:54.209 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 10 March 2026 00:56:22 +0000 (0:00:00.328) 0:09:54.537 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Tuesday 10 March 2026 00:56:23 +0000 (0:00:00.892) 0:09:55.429 *********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Tuesday 10 March 2026 00:56:24 +0000 (0:00:00.439) 0:09:55.869 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Tuesday 10 March 2026 00:56:26 +0000 (0:00:02.235) 0:09:58.104 *********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Tuesday 10 March 2026 00:56:26 +0000 (0:00:00.229) 0:09:58.334 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Tuesday 10 March 2026 00:56:35 +0000 (0:00:08.850) 0:10:07.185 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Tuesday 10 March 2026 00:56:39 +0000 (0:00:03.832) 0:10:11.017 *********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Tuesday 10 March 2026 00:56:40 +0000 (0:00:00.555) 0:10:11.573 *********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Tuesday 10 March 2026 00:56:41 +0000 (0:00:01.172) 0:10:12.745 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Tuesday 10 March 2026 00:56:43 +0000 (0:00:02.687) 0:10:15.433 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3]
=> (item=None)  2026-03-10 00:58:22.867364 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867370 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 00:58:22.867375 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-10 00:58:22.867381 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867387 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 00:58:22.867393 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-10 00:58:22.867399 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867404 | orchestrator | 2026-03-10 00:58:22.867410 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-10 00:58:22.867416 | orchestrator | Tuesday 10 March 2026 00:56:45 +0000 (0:00:01.550) 0:10:16.984 ********* 2026-03-10 00:58:22.867422 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867427 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867434 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867440 | orchestrator | 2026-03-10 00:58:22.867445 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-10 00:58:22.867451 | orchestrator | Tuesday 10 March 2026 00:56:48 +0000 (0:00:02.732) 0:10:19.717 ********* 2026-03-10 00:58:22.867457 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.867462 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.867468 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.867474 | orchestrator | 2026-03-10 00:58:22.867480 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-10 00:58:22.867485 | orchestrator | Tuesday 10 March 2026 00:56:48 +0000 (0:00:00.345) 0:10:20.063 ********* 2026-03-10 00:58:22.867491 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-10 00:58:22.867497 | orchestrator | 2026-03-10 00:58:22.867503 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-10 00:58:22.867508 | orchestrator | Tuesday 10 March 2026 00:56:49 +0000 (0:00:00.864) 0:10:20.927 ********* 2026-03-10 00:58:22.867515 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.867536 | orchestrator | 2026-03-10 00:58:22.867542 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-10 00:58:22.867548 | orchestrator | Tuesday 10 March 2026 00:56:50 +0000 (0:00:00.628) 0:10:21.555 ********* 2026-03-10 00:58:22.867554 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867559 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867565 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867571 | orchestrator | 2026-03-10 00:58:22.867577 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-10 00:58:22.867582 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:01.384) 0:10:22.940 ********* 2026-03-10 00:58:22.867588 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867594 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867600 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867605 | orchestrator | 2026-03-10 00:58:22.867611 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-10 00:58:22.867617 | orchestrator | Tuesday 10 March 2026 00:56:52 +0000 (0:00:01.499) 0:10:24.439 ********* 2026-03-10 00:58:22.867622 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867628 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867639 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867645 | orchestrator | 2026-03-10 
00:58:22.867651 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-10 00:58:22.867657 | orchestrator | Tuesday 10 March 2026 00:56:54 +0000 (0:00:02.030) 0:10:26.470 ********* 2026-03-10 00:58:22.867663 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867672 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867678 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867683 | orchestrator | 2026-03-10 00:58:22.867689 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-10 00:58:22.867695 | orchestrator | Tuesday 10 March 2026 00:56:57 +0000 (0:00:02.204) 0:10:28.675 ********* 2026-03-10 00:58:22.867701 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.867707 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.867712 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.867718 | orchestrator | 2026-03-10 00:58:22.867724 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 00:58:22.867730 | orchestrator | Tuesday 10 March 2026 00:56:58 +0000 (0:00:01.783) 0:10:30.458 ********* 2026-03-10 00:58:22.867735 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867741 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867747 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867753 | orchestrator | 2026-03-10 00:58:22.867759 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-10 00:58:22.867764 | orchestrator | Tuesday 10 March 2026 00:56:59 +0000 (0:00:01.075) 0:10:31.534 ********* 2026-03-10 00:58:22.867770 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.867776 | orchestrator | 2026-03-10 00:58:22.867782 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-10 00:58:22.867788 | orchestrator | Tuesday 10 March 2026 00:57:01 +0000 (0:00:01.036) 0:10:32.570 ********* 2026-03-10 00:58:22.867793 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.867799 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.867805 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.867811 | orchestrator | 2026-03-10 00:58:22.867820 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-10 00:58:22.867826 | orchestrator | Tuesday 10 March 2026 00:57:01 +0000 (0:00:00.382) 0:10:32.952 ********* 2026-03-10 00:58:22.867832 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.867838 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.867843 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.867849 | orchestrator | 2026-03-10 00:58:22.867855 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-10 00:58:22.867861 | orchestrator | Tuesday 10 March 2026 00:57:02 +0000 (0:00:01.528) 0:10:34.481 ********* 2026-03-10 00:58:22.867866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 00:58:22.867872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 00:58:22.867878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 00:58:22.867884 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.867890 | orchestrator | 2026-03-10 00:58:22.867895 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-10 00:58:22.867901 | orchestrator | Tuesday 10 March 2026 00:57:03 +0000 (0:00:00.850) 0:10:35.331 ********* 2026-03-10 00:58:22.867907 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.867913 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.867918 | orchestrator | ok: [testbed-node-5] 2026-03-10 
00:58:22.867924 | orchestrator | 2026-03-10 00:58:22.867930 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-10 00:58:22.867936 | orchestrator | 2026-03-10 00:58:22.867942 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 00:58:22.867947 | orchestrator | Tuesday 10 March 2026 00:57:04 +0000 (0:00:00.848) 0:10:36.180 ********* 2026-03-10 00:58:22.867963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.867972 | orchestrator | 2026-03-10 00:58:22.867982 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 00:58:22.867991 | orchestrator | Tuesday 10 March 2026 00:57:05 +0000 (0:00:00.900) 0:10:37.080 ********* 2026-03-10 00:58:22.867999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.868017 | orchestrator | 2026-03-10 00:58:22.868028 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 00:58:22.868037 | orchestrator | Tuesday 10 March 2026 00:57:06 +0000 (0:00:01.106) 0:10:38.187 ********* 2026-03-10 00:58:22.868047 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868056 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868065 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868074 | orchestrator | 2026-03-10 00:58:22.868082 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 00:58:22.868091 | orchestrator | Tuesday 10 March 2026 00:57:07 +0000 (0:00:00.539) 0:10:38.726 ********* 2026-03-10 00:58:22.868101 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868111 | orchestrator | ok: [testbed-node-4] 2026-03-10 
00:58:22.868120 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868130 | orchestrator | 2026-03-10 00:58:22.868139 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 00:58:22.868149 | orchestrator | Tuesday 10 March 2026 00:57:08 +0000 (0:00:01.034) 0:10:39.760 ********* 2026-03-10 00:58:22.868157 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868167 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868177 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868186 | orchestrator | 2026-03-10 00:58:22.868194 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 00:58:22.868200 | orchestrator | Tuesday 10 March 2026 00:57:09 +0000 (0:00:01.263) 0:10:41.024 ********* 2026-03-10 00:58:22.868206 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868212 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868218 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868223 | orchestrator | 2026-03-10 00:58:22.868229 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 00:58:22.868235 | orchestrator | Tuesday 10 March 2026 00:57:10 +0000 (0:00:00.872) 0:10:41.897 ********* 2026-03-10 00:58:22.868241 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868246 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868252 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868258 | orchestrator | 2026-03-10 00:58:22.868270 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 00:58:22.868276 | orchestrator | Tuesday 10 March 2026 00:57:10 +0000 (0:00:00.393) 0:10:42.290 ********* 2026-03-10 00:58:22.868282 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868288 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868293 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 00:58:22.868299 | orchestrator | 2026-03-10 00:58:22.868305 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 00:58:22.868311 | orchestrator | Tuesday 10 March 2026 00:57:11 +0000 (0:00:00.410) 0:10:42.701 ********* 2026-03-10 00:58:22.868316 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868322 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868328 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868334 | orchestrator | 2026-03-10 00:58:22.868340 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 00:58:22.868346 | orchestrator | Tuesday 10 March 2026 00:57:11 +0000 (0:00:00.505) 0:10:43.206 ********* 2026-03-10 00:58:22.868351 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868357 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868363 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868376 | orchestrator | 2026-03-10 00:58:22.868381 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 00:58:22.868387 | orchestrator | Tuesday 10 March 2026 00:57:12 +0000 (0:00:00.820) 0:10:44.026 ********* 2026-03-10 00:58:22.868393 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868399 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868404 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868410 | orchestrator | 2026-03-10 00:58:22.868416 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 00:58:22.868427 | orchestrator | Tuesday 10 March 2026 00:57:13 +0000 (0:00:00.641) 0:10:44.668 ********* 2026-03-10 00:58:22.868433 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868439 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868444 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
00:58:22.868450 | orchestrator | 2026-03-10 00:58:22.868456 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 00:58:22.868462 | orchestrator | Tuesday 10 March 2026 00:57:13 +0000 (0:00:00.292) 0:10:44.960 ********* 2026-03-10 00:58:22.868467 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868473 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868479 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868484 | orchestrator | 2026-03-10 00:58:22.868490 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 00:58:22.868496 | orchestrator | Tuesday 10 March 2026 00:57:13 +0000 (0:00:00.296) 0:10:45.257 ********* 2026-03-10 00:58:22.868502 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868507 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868513 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868535 | orchestrator | 2026-03-10 00:58:22.868541 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 00:58:22.868547 | orchestrator | Tuesday 10 March 2026 00:57:14 +0000 (0:00:00.534) 0:10:45.791 ********* 2026-03-10 00:58:22.868553 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868559 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868565 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868571 | orchestrator | 2026-03-10 00:58:22.868576 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 00:58:22.868582 | orchestrator | Tuesday 10 March 2026 00:57:14 +0000 (0:00:00.375) 0:10:46.167 ********* 2026-03-10 00:58:22.868588 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868594 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868600 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868606 | orchestrator | 2026-03-10 
00:58:22.868612 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 00:58:22.868618 | orchestrator | Tuesday 10 March 2026 00:57:14 +0000 (0:00:00.344) 0:10:46.511 ********* 2026-03-10 00:58:22.868623 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868629 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868635 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868641 | orchestrator | 2026-03-10 00:58:22.868647 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 00:58:22.868653 | orchestrator | Tuesday 10 March 2026 00:57:15 +0000 (0:00:00.342) 0:10:46.854 ********* 2026-03-10 00:58:22.868659 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868664 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868670 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868676 | orchestrator | 2026-03-10 00:58:22.868682 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 00:58:22.868688 | orchestrator | Tuesday 10 March 2026 00:57:15 +0000 (0:00:00.650) 0:10:47.505 ********* 2026-03-10 00:58:22.868694 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868700 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868705 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868711 | orchestrator | 2026-03-10 00:58:22.868717 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 00:58:22.868728 | orchestrator | Tuesday 10 March 2026 00:57:16 +0000 (0:00:00.328) 0:10:47.834 ********* 2026-03-10 00:58:22.868734 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868740 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868746 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868752 | orchestrator | 2026-03-10 00:58:22.868758 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 00:58:22.868764 | orchestrator | Tuesday 10 March 2026 00:57:16 +0000 (0:00:00.352) 0:10:48.186 ********* 2026-03-10 00:58:22.868770 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:58:22.868776 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:58:22.868781 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:58:22.868787 | orchestrator | 2026-03-10 00:58:22.868793 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-10 00:58:22.868799 | orchestrator | Tuesday 10 March 2026 00:57:17 +0000 (0:00:00.864) 0:10:49.050 ********* 2026-03-10 00:58:22.868805 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.868811 | orchestrator | 2026-03-10 00:58:22.868817 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-10 00:58:22.868827 | orchestrator | Tuesday 10 March 2026 00:57:18 +0000 (0:00:00.593) 0:10:49.644 ********* 2026-03-10 00:58:22.868833 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.868839 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 00:58:22.868845 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 00:58:22.868851 | orchestrator | 2026-03-10 00:58:22.868857 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-10 00:58:22.868862 | orchestrator | Tuesday 10 March 2026 00:57:20 +0000 (0:00:02.338) 0:10:51.982 ********* 2026-03-10 00:58:22.868868 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 00:58:22.868874 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 00:58:22.868880 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.868886 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-10 00:58:22.868892 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-10 00:58:22.868898 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.868903 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 00:58:22.868909 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-10 00:58:22.868915 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.868921 | orchestrator | 2026-03-10 00:58:22.868927 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-10 00:58:22.868933 | orchestrator | Tuesday 10 March 2026 00:57:21 +0000 (0:00:01.504) 0:10:53.486 ********* 2026-03-10 00:58:22.868938 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.868944 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:58:22.868955 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:58:22.868961 | orchestrator | 2026-03-10 00:58:22.868966 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-10 00:58:22.868972 | orchestrator | Tuesday 10 March 2026 00:57:22 +0000 (0:00:00.397) 0:10:53.884 ********* 2026-03-10 00:58:22.868978 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:58:22.868984 | orchestrator | 2026-03-10 00:58:22.868990 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-10 00:58:22.868996 | orchestrator | Tuesday 10 March 2026 00:57:22 +0000 (0:00:00.568) 0:10:54.452 ********* 2026-03-10 00:58:22.869002 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-10 00:58:22.869008 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-10 00:58:22.869020 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-10 00:58:22.869026 | orchestrator | 2026-03-10 00:58:22.869032 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-10 00:58:22.869038 | orchestrator | Tuesday 10 March 2026 00:57:24 +0000 (0:00:01.458) 0:10:55.911 ********* 2026-03-10 00:58:22.869044 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.869050 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-10 00:58:22.869056 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.869062 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-10 00:58:22.869068 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.869074 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-10 00:58:22.869079 | orchestrator | 2026-03-10 00:58:22.869085 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-10 00:58:22.869091 | orchestrator | Tuesday 10 March 2026 00:57:29 +0000 (0:00:04.694) 0:11:00.606 ********* 2026-03-10 00:58:22.869097 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.869103 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 00:58:22.869109 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.869114 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 00:58:22.869120 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 00:58:22.869126 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 00:58:22.869132 | orchestrator | 2026-03-10 00:58:22.869138 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-10 00:58:22.869143 | orchestrator | Tuesday 10 March 2026 00:57:31 +0000 (0:00:02.597) 0:11:03.203 ********* 2026-03-10 00:58:22.869149 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 00:58:22.869155 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:58:22.869161 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 00:58:22.869167 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:58:22.869173 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 00:58:22.869179 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:58:22.869184 | orchestrator | 2026-03-10 00:58:22.869190 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-10 00:58:22.869199 | orchestrator | Tuesday 10 March 2026 00:57:32 +0000 (0:00:01.274) 0:11:04.478 ********* 2026-03-10 00:58:22.869205 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-10 00:58:22.869211 | orchestrator | 2026-03-10 00:58:22.869217 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-10 00:58:22.869223 | orchestrator | Tuesday 10 March 2026 00:57:33 +0000 (0:00:00.260) 0:11:04.738 ********* 2026-03-10 00:58:22.869229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-10 00:58:22.869236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869264 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:58:22.869270 | orchestrator | 2026-03-10 00:58:22.869276 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-10 00:58:22.869285 | orchestrator | Tuesday 10 March 2026 00:57:34 +0000 (0:00:01.427) 0:11:06.166 ********* 2026-03-10 00:58:22.869291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 00:58:22.869321 | orchestrator | skipping: [testbed-node-3] 2026-03-10 
00:58:22.869326 | orchestrator |
2026-03-10 00:58:22.869333 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-10 00:58:22.869338 | orchestrator | Tuesday 10 March 2026 00:57:35 +0000 (0:00:00.673) 0:11:06.839 *********
2026-03-10 00:58:22.869344 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-10 00:58:22.869350 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-10 00:58:22.869356 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-10 00:58:22.869362 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-10 00:58:22.869368 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-10 00:58:22.869374 | orchestrator |
2026-03-10 00:58:22.869380 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-10 00:58:22.869386 | orchestrator | Tuesday 10 March 2026 00:58:07 +0000 (0:00:32.668) 0:11:39.508 *********
2026-03-10 00:58:22.869391 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.869397 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.869403 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.869409 | orchestrator |
2026-03-10 00:58:22.869415 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-10 00:58:22.869421 | orchestrator | Tuesday 10 March 2026 00:58:08 +0000 (0:00:00.410) 0:11:39.918 *********
2026-03-10 00:58:22.869427 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.869433 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.869438 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.869444 | orchestrator |
2026-03-10 00:58:22.869450 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-10 00:58:22.869456 | orchestrator | Tuesday 10 March 2026 00:58:08 +0000 (0:00:00.331) 0:11:40.250 *********
2026-03-10 00:58:22.869462 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.869468 | orchestrator |
2026-03-10 00:58:22.869474 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-10 00:58:22.869484 | orchestrator | Tuesday 10 March 2026 00:58:09 +0000 (0:00:00.982) 0:11:41.232 *********
2026-03-10 00:58:22.869490 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.869496 | orchestrator |
2026-03-10 00:58:22.869505 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-10 00:58:22.869511 | orchestrator | Tuesday 10 March 2026 00:58:10 +0000 (0:00:00.638) 0:11:41.871 *********
2026-03-10 00:58:22.869552 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.869560 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.869566 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.869572 | orchestrator |
2026-03-10 00:58:22.869578 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-10 00:58:22.869583 | orchestrator | Tuesday 10 March 2026 00:58:11 +0000 (0:00:01.465) 0:11:43.336 *********
2026-03-10 00:58:22.869589 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.869595 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.869601 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.869607 | orchestrator |
2026-03-10 00:58:22.869613 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-10 00:58:22.869618 | orchestrator | Tuesday 10 March 2026 00:58:13 +0000 (0:00:01.557) 0:11:44.893 *********
2026-03-10 00:58:22.869624 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:58:22.869630 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:58:22.869636 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:58:22.869642 | orchestrator |
2026-03-10 00:58:22.869647 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-10 00:58:22.869653 | orchestrator | Tuesday 10 March 2026 00:58:15 +0000 (0:00:01.893) 0:11:46.787 *********
2026-03-10 00:58:22.869659 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.869669 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.869676 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 00:58:22.869682 | orchestrator |
2026-03-10 00:58:22.869688 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-10 00:58:22.869693 | orchestrator | Tuesday 10 March 2026 00:58:17 +0000 (0:00:02.749) 0:11:49.536 *********
2026-03-10 00:58:22.869699 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.869705 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.869711 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.869717 | orchestrator |
2026-03-10 00:58:22.869723 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-10 00:58:22.869729 | orchestrator | Tuesday 10 March 2026 00:58:18 +0000 (0:00:00.377) 0:11:49.914 *********
2026-03-10 00:58:22.869734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:58:22.869740 | orchestrator |
2026-03-10 00:58:22.869745 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-10 00:58:22.869750 | orchestrator | Tuesday 10 March 2026 00:58:18 +0000 (0:00:00.597) 0:11:50.512 *********
2026-03-10 00:58:22.869755 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.869760 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.869766 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.869771 | orchestrator |
2026-03-10 00:58:22.869776 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-10 00:58:22.869781 | orchestrator | Tuesday 10 March 2026 00:58:19 +0000 (0:00:00.689) 0:11:51.202 *********
2026-03-10 00:58:22.869786 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.869791 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:58:22.869801 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:58:22.869806 | orchestrator |
2026-03-10 00:58:22.869812 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-10 00:58:22.869817 | orchestrator | Tuesday 10 March 2026 00:58:20 +0000 (0:00:00.357) 0:11:51.559 *********
2026-03-10 00:58:22.869822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 00:58:22.869827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 00:58:22.869832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 00:58:22.869837 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:58:22.869842 | orchestrator |
2026-03-10 00:58:22.869847 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-10 00:58:22.869853 | orchestrator | Tuesday 10 March 2026 00:58:20 +0000 (0:00:00.747) 0:11:52.307 *********
2026-03-10 00:58:22.869858 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:58:22.869863 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:58:22.869868 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:58:22.869873 | orchestrator |
2026-03-10 00:58:22.869878 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:58:22.869883 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-10 00:58:22.869889 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-10 00:58:22.869894 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-10 00:58:22.869899 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-10 00:58:22.869904 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-10 00:58:22.869913 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-10 00:58:22.869918 | orchestrator |
2026-03-10 00:58:22.869923 | orchestrator |
2026-03-10 00:58:22.869929 | orchestrator |
2026-03-10 00:58:22.869934 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:58:22.869939 | orchestrator | Tuesday 10 March 2026 00:58:21 +0000 (0:00:00.278) 0:11:52.586 *********
2026-03-10 00:58:22.869944 | orchestrator | ===============================================================================
2026-03-10 00:58:22.869949 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 46.34s
2026-03-10 00:58:22.869954 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.38s
2026-03-10 00:58:22.869960 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.74s
2026-03-10 00:58:22.869965 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.67s
2026-03-10 00:58:22.869970 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 17.22s
2026-03-10 00:58:22.869978 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.57s
2026-03-10 00:58:22.869983 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.05s
2026-03-10 00:58:22.869988 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.27s
2026-03-10 00:58:22.869993 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.85s
2026-03-10 00:58:22.869998 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.30s
2026-03-10 00:58:22.870007 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.83s
2026-03-10 00:58:22.870037 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 5.91s
2026-03-10 00:58:22.870049 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.68s
2026-03-10 00:58:22.870054 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.83s
2026-03-10 00:58:22.870059 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.69s
2026-03-10 00:58:22.870064 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.25s
2026-03-10 00:58:22.870069 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.95s
2026-03-10 00:58:22.870074 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.83s
2026-03-10 00:58:22.870079 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.43s
2026-03-10 00:58:22.870085 | orchestrator | ceph-mgr : Get keys from monitors --------------------------------------- 3.28s
2026-03-10 00:58:22.870090 | orchestrator | 2026-03-10 00:58:22 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED
2026-03-10 00:58:22.870095 | orchestrator | 2026-03-10 00:58:22 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:58:25.895428 | orchestrator | 2026-03-10 00:58:25 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:58:25.897093 | orchestrator | 2026-03-10 00:58:25 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED
2026-03-10 00:58:25.898959 | orchestrator | 2026-03-10 00:58:25 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED
2026-03-10 00:58:25.899043 | orchestrator | 2026-03-10 00:58:25 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:58:28.934378 | orchestrator | 2026-03-10 00:58:28 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:58:28.935921 | orchestrator | 2026-03-10 00:58:28 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED
2026-03-10 00:58:28.937484 | orchestrator | 2026-03-10 00:58:28 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED
2026-03-10 00:58:28.937584 | orchestrator | 2026-03-10 00:58:28 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:58:31.979461 | orchestrator | 2026-03-10 00:58:31 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:58:31.984504 | orchestrator | 2026-03-10 00:58:31 | INFO  | Task
5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED
2026-03-10 00:59:42.199031 | orchestrator | 2026-03-10 00:59:42 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED
2026-03-10 00:59:42.199088 | orchestrator | 2026-03-10 00:59:42 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:59:45.243872 | orchestrator | 2026-03-10 00:59:45 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:59:45.245712 | orchestrator | 2026-03-10 00:59:45 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state STARTED
2026-03-10 00:59:45.246422 | orchestrator | 2026-03-10 00:59:45 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state STARTED
2026-03-10 00:59:45.246521 | orchestrator | 2026-03-10 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:59:48.295329 | orchestrator | 2026-03-10 00:59:48 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:59:48.298326 | orchestrator | 2026-03-10 00:59:48 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 00:59:48.301216 | orchestrator | 2026-03-10 00:59:48 | INFO  | Task 5600c403-0dc2-46d1-afe2-55ba62798321 is in state SUCCESS
2026-03-10 00:59:48.303140 | orchestrator |
2026-03-10 00:59:48.303208 | orchestrator |
2026-03-10 00:59:48.303220 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:59:48.303230 | orchestrator |
2026-03-10 00:59:48.303239 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 00:59:48.303247 | orchestrator | Tuesday 10 March 2026 00:56:30 +0000 (0:00:00.297) 0:00:00.297 *********
2026-03-10 00:59:48.303254 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:48.303263 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:48.303270 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:48.303277 | orchestrator |
2026-03-10 00:59:48.303285 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 00:59:48.303292 | orchestrator | Tuesday 10 March 2026 00:56:31 +0000 (0:00:00.347) 0:00:00.644 *********
2026-03-10 00:59:48.303300 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-10 00:59:48.303308 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-10 00:59:48.303315 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-10 00:59:48.303322 | orchestrator |
2026-03-10 00:59:48.303329 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-10 00:59:48.303336 | orchestrator |
2026-03-10 00:59:48.303343 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-10 00:59:48.303350 | orchestrator | Tuesday 10 March 2026 00:56:31 +0000 (0:00:00.456) 0:00:01.101 *********
2026-03-10 00:59:48.303357 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:48.303365 | orchestrator |
2026-03-10 00:59:48.303372 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-10 00:59:48.303379 | orchestrator | Tuesday 10 March 2026 00:56:32 +0000 (0:00:00.539) 0:00:01.641 *********
2026-03-10 00:59:48.303386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-10 00:59:48.303393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-10 00:59:48.303400 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-10 00:59:48.303407 | orchestrator |
2026-03-10 00:59:48.303415 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-10 00:59:48.303421 | orchestrator | Tuesday 10 March 2026 00:56:32 +0000 (0:00:00.678) 0:00:02.320 *********
2026-03-10 00:59:48.303431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.303517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.303540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.303552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.303561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.303574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.303589 | orchestrator |
2026-03-10 00:59:48.303596 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-10 00:59:48.303604 | orchestrator | Tuesday 10 March 2026 00:56:34 +0000 (0:00:01.776) 0:00:04.096 *********
2026-03-10 00:59:48.303611 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:48.303618 | orchestrator |
2026-03-10 00:59:48.303626 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-10 00:59:48.303633 | orchestrator | Tuesday 10 March 2026 00:56:35 +0000 (0:00:00.586) 0:00:04.682 *********
2026-03-10 00:59:48.303646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.303654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.303662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.303678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.303692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.303701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130',
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 00:59:48.303709 | orchestrator | 2026-03-10 00:59:48.303717 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-10 00:59:48.303724 | orchestrator | Tuesday 10 March 2026 00:56:37 +0000 (0:00:02.761) 0:00:07.444 ********* 2026-03-10 00:59:48.303732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:59:48.303748 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:59:48.303756 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.303770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 
00:59:48.303778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:59:48.303786 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.303794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:59:48.303813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:59:48.303821 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.303828 | orchestrator | 2026-03-10 00:59:48.303836 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-10 00:59:48.303843 | orchestrator | Tuesday 10 March 2026 00:56:39 +0000 (0:00:01.459) 0:00:08.904 ********* 2026-03-10 00:59:48.303855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:59:48.303864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:59:48.303872 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.303879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:59:48.303904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:59:48.303912 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.303924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 00:59:48.303938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 00:59:48.303951 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.303964 | orchestrator | 2026-03-10 00:59:48.303976 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-10 00:59:48.303998 | orchestrator | Tuesday 10 March 2026 
00:56:40 +0000 (0:00:01.274) 0:00:10.179 ********* 2026-03-10 00:59:48.304012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 00:59:48.304032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 00:59:48.304046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 00:59:48.304062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 00:59:48.304071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 00:59:48.304089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.304097 | orchestrator |
2026-03-10 00:59:48.304105 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-03-10 00:59:48.304112 | orchestrator | Tuesday 10 March 2026 00:56:43 +0000 (0:00:02.443) 0:00:12.623 *********
2026-03-10 00:59:48.304120 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:48.304127 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:48.304135 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:48.304142 | orchestrator |
2026-03-10 00:59:48.304149 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-03-10 00:59:48.304156 | orchestrator | Tuesday 10 March 2026 00:56:46 +0000 (0:00:03.427) 0:00:16.051 *********
2026-03-10 00:59:48.304164 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:48.304171 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:48.304178 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:48.304185 | orchestrator |
2026-03-10 00:59:48.304192 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-03-10 00:59:48.304199 | orchestrator | Tuesday 10 March 2026 00:56:48 +0000 (0:00:02.300) 0:00:18.351 *********
2026-03-10 00:59:48.304356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.304369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.304383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:48.304397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.304410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.304419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:48.304432 | orchestrator |
2026-03-10 00:59:48.304439 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-10 00:59:48.304447 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:02.616) 0:00:20.967 *********
2026-03-10 00:59:48.304505 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:48.304515 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:48.304523 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:48.304532 | orchestrator |
2026-03-10 00:59:48.304541 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-10 00:59:48.304549 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:00.331) 0:00:21.299 *********
2026-03-10 00:59:48.304558 | orchestrator |
2026-03-10 00:59:48.304567 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-10 00:59:48.304575 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:00.066) 0:00:21.365 *********
2026-03-10 00:59:48.304584 | orchestrator |
2026-03-10 00:59:48.304593 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-10 00:59:48.304601 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:00.071) 0:00:21.437 *********
2026-03-10 00:59:48.304610 | orchestrator |
2026-03-10 00:59:48.304619 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-10 00:59:48.304628 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:00.074) 0:00:21.511 *********
2026-03-10 00:59:48.304636 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:48.304645 | orchestrator |
2026-03-10 00:59:48.304654 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-10 00:59:48.304662 | orchestrator | Tuesday 10 March 2026 00:56:52 +0000 (0:00:00.220) 0:00:22.253 *********
2026-03-10 00:59:48.304671 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:48.304680 | orchestrator |
2026-03-10 00:59:48.304688 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-10 00:59:48.304697 | orchestrator | Tuesday 10 March 2026 00:56:52 +0000 (0:00:00.741) 0:00:22.473 *********
2026-03-10 00:59:48.304705 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:48.304714 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:48.304723 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:48.304731 | orchestrator |
2026-03-10 00:59:48.304745 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-10 00:59:48.304754 | orchestrator | Tuesday 10 March 2026 00:58:03 +0000 (0:01:10.277) 0:01:32.751 *********
2026-03-10 00:59:48.304763 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:48.304771 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:48.304780 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:48.304788 | orchestrator |
2026-03-10 00:59:48.304797 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-10 00:59:48.304806 | orchestrator | Tuesday 10 March 2026 00:59:33 +0000 (0:01:30.191) 0:03:02.943 *********
2026-03-10 00:59:48.304814 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:48.304823 | orchestrator |
2026-03-10 00:59:48.304832 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-10 00:59:48.304847 | orchestrator | Tuesday 10 March 2026 00:59:34 +0000 (0:00:00.782) 0:03:03.725 *********
2026-03-10 00:59:48.304856 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:48.304865 | orchestrator |
2026-03-10 00:59:48.304874 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-10 00:59:48.304882 | orchestrator | Tuesday 10 March 2026 00:59:36 +0000 (0:00:02.690) 0:03:06.416 *********
2026-03-10 00:59:48.304891 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:48.304899 | orchestrator |
2026-03-10 00:59:48.304908 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-10 00:59:48.304917 | orchestrator | Tuesday 10 March 2026 00:59:39 +0000 (0:00:02.444) 0:03:08.861 *********
2026-03-10 00:59:48.304925 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:48.304934 | orchestrator |
2026-03-10 00:59:48.304943 | orchestrator | TASK [opensearch : Apply retention policy to
existing indices] ***************** 2026-03-10 00:59:48.304953 | orchestrator | Tuesday 10 March 2026 00:59:42 +0000 (0:00:03.247) 0:03:12.108 ********* 2026-03-10 00:59:48.304963 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.304973 | orchestrator | 2026-03-10 00:59:48.304988 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:59:48.305000 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:59:48.305011 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:59:48.305022 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:59:48.305032 | orchestrator | 2026-03-10 00:59:48.305041 | orchestrator | 2026-03-10 00:59:48.305051 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:59:48.305061 | orchestrator | Tuesday 10 March 2026 00:59:45 +0000 (0:00:02.644) 0:03:14.752 ********* 2026-03-10 00:59:48.305070 | orchestrator | =============================================================================== 2026-03-10 00:59:48.305080 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 90.19s 2026-03-10 00:59:48.305090 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.28s 2026-03-10 00:59:48.305099 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.43s 2026-03-10 00:59:48.305109 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.25s 2026-03-10 00:59:48.305119 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.76s 2026-03-10 00:59:48.305129 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.69s 
2026-03-10 00:59:48.305138 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.64s 2026-03-10 00:59:48.305148 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.62s 2026-03-10 00:59:48.305158 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.44s 2026-03-10 00:59:48.305167 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.44s 2026-03-10 00:59:48.305177 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.30s 2026-03-10 00:59:48.305186 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.78s 2026-03-10 00:59:48.305197 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.46s 2026-03-10 00:59:48.305206 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.27s 2026-03-10 00:59:48.305216 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s 2026-03-10 00:59:48.305226 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.74s 2026-03-10 00:59:48.305236 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2026-03-10 00:59:48.305251 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-03-10 00:59:48.305261 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-03-10 00:59:48.305271 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-03-10 00:59:48.305998 | orchestrator | 2026-03-10 00:59:48 | INFO  | Task 15581d8d-b3ca-4c90-9f9d-8195278f383e is in state SUCCESS 2026-03-10 00:59:48.307624 | orchestrator | 2026-03-10 00:59:48.307670 | orchestrator | 2026-03-10 00:59:48.307682 | 
orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-10 00:59:48.307694 | orchestrator | 2026-03-10 00:59:48.307704 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-10 00:59:48.307729 | orchestrator | Tuesday 10 March 2026 00:56:30 +0000 (0:00:00.094) 0:00:00.094 ********* 2026-03-10 00:59:48.307740 | orchestrator | ok: [localhost] => { 2026-03-10 00:59:48.307752 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-10 00:59:48.307763 | orchestrator | } 2026-03-10 00:59:48.307774 | orchestrator | 2026-03-10 00:59:48.307785 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-10 00:59:48.307796 | orchestrator | Tuesday 10 March 2026 00:56:30 +0000 (0:00:00.059) 0:00:00.153 ********* 2026-03-10 00:59:48.307806 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-10 00:59:48.307818 | orchestrator | ...ignoring 2026-03-10 00:59:48.307828 | orchestrator | 2026-03-10 00:59:48.307838 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-10 00:59:48.307849 | orchestrator | Tuesday 10 March 2026 00:56:33 +0000 (0:00:02.863) 0:00:03.017 ********* 2026-03-10 00:59:48.307859 | orchestrator | skipping: [localhost] 2026-03-10 00:59:48.307869 | orchestrator | 2026-03-10 00:59:48.307880 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-10 00:59:48.307890 | orchestrator | Tuesday 10 March 2026 00:56:33 +0000 (0:00:00.078) 0:00:03.095 ********* 2026-03-10 00:59:48.307900 | orchestrator | ok: [localhost] 2026-03-10 00:59:48.307910 | orchestrator | 2026-03-10 00:59:48.307920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:59:48.307931 | orchestrator | 2026-03-10 00:59:48.307941 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:59:48.307951 | orchestrator | Tuesday 10 March 2026 00:56:33 +0000 (0:00:00.156) 0:00:03.252 ********* 2026-03-10 00:59:48.307961 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.307972 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.307982 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.307992 | orchestrator | 2026-03-10 00:59:48.308002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:59:48.308012 | orchestrator | Tuesday 10 March 2026 00:56:34 +0000 (0:00:00.320) 0:00:03.573 ********* 2026-03-10 00:59:48.308023 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-10 00:59:48.308034 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-10 00:59:48.308044 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-10 00:59:48.308054 | orchestrator | 2026-03-10 00:59:48.308064 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-10 00:59:48.308075 | orchestrator | 2026-03-10 00:59:48.308086 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-10 00:59:48.308096 | orchestrator | Tuesday 10 March 2026 00:56:34 +0000 (0:00:00.614) 0:00:04.188 ********* 2026-03-10 00:59:48.308106 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-10 00:59:48.308117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-10 00:59:48.308127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-10 00:59:48.308137 | orchestrator | 2026-03-10 00:59:48.308147 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-10 00:59:48.308178 | orchestrator | Tuesday 10 March 2026 00:56:35 +0000 (0:00:00.399) 0:00:04.588 ********* 2026-03-10 00:59:48.308189 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:48.308200 | orchestrator | 2026-03-10 00:59:48.308211 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-10 00:59:48.308222 | orchestrator | Tuesday 10 March 2026 00:56:35 +0000 (0:00:00.743) 0:00:05.332 ********* 2026-03-10 00:59:48.308263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.308280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.308301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.308313 | orchestrator | 2026-03-10 00:59:48.308330 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-10 00:59:48.308341 | orchestrator | Tuesday 10 March 2026 00:56:39 +0000 (0:00:03.128) 0:00:08.460 ********* 2026-03-10 00:59:48.308352 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.308375 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.308385 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.308396 | orchestrator | 2026-03-10 00:59:48.308406 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-10 00:59:48.308446 | orchestrator | Tuesday 10 March 2026 00:56:39 +0000 (0:00:00.863) 0:00:09.324 ********* 2026-03-10 00:59:48.308484 | orchestrator | 
skipping: [testbed-node-1] 2026-03-10 00:59:48.308495 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.308506 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.308515 | orchestrator | 2026-03-10 00:59:48.308526 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-10 00:59:48.308536 | orchestrator | Tuesday 10 March 2026 00:56:41 +0000 (0:00:01.721) 0:00:11.046 ********* 2026-03-10 00:59:48.308548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.308581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.308594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.308611 | orchestrator | 2026-03-10 00:59:48.308622 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-10 00:59:48.308632 | orchestrator | Tuesday 10 March 2026 00:56:46 +0000 (0:00:04.541) 0:00:15.587 ********* 2026-03-10 00:59:48.308642 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.308652 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.308662 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.308672 | orchestrator | 2026-03-10 00:59:48.308683 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-10 00:59:48.308693 | orchestrator | Tuesday 10 March 2026 00:56:47 +0000 (0:00:01.222) 0:00:16.810 ********* 2026-03-10 00:59:48.308703 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.308713 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:48.308723 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:48.308733 | orchestrator | 2026-03-10 00:59:48.308743 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-10 00:59:48.308753 | orchestrator | Tuesday 10 March 2026 00:56:52 +0000 (0:00:05.016) 0:00:21.827 ********* 2026-03-10 00:59:48.308763 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:48.308773 | orchestrator | 2026-03-10 00:59:48.308783 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-10 00:59:48.308793 | orchestrator | Tuesday 10 March 2026 00:56:53 +0000 (0:00:00.588) 0:00:22.415 ********* 2026-03-10 00:59:48.308818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.308831 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.308842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.308862 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.308886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.308898 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.308908 | orchestrator | 2026-03-10 00:59:48.308919 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-10 00:59:48.308929 | orchestrator | Tuesday 10 March 2026 00:56:57 
+0000 (0:00:04.248) 0:00:26.663 ********* 2026-03-10 00:59:48.308946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.308957 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
00:59:48.308979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.308991 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.309001 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.309020 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.309031 | orchestrator | 2026-03-10 00:59:48.309041 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-03-10 00:59:48.309051 | orchestrator | Tuesday 10 March 2026 00:57:01 +0000 (0:00:04.054) 0:00:30.718 ********* 2026-03-10 00:59:48.309068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.309080 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.309096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.309115 
| orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.309125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:48.309137 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
00:59:48.309147 | orchestrator | 2026-03-10 00:59:48.309157 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-10 00:59:48.309167 | orchestrator | Tuesday 10 March 2026 00:57:04 +0000 (0:00:03.511) 0:00:34.230 ********* 2026-03-10 00:59:48.309314 | orchestrator | 2026-03-10 00:59:48 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 00:59:48.309340 | orchestrator | 2026-03-10 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:48.312943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.312997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.313019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 00:59:48.313039 | orchestrator | 2026-03-10 00:59:48.313046 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-10 00:59:48.313053 | orchestrator | Tuesday 10 March 2026 00:57:08 +0000 (0:00:03.277) 0:00:37.507 ********* 2026-03-10 00:59:48.313059 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.313066 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:48.313072 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:48.313078 | orchestrator | 2026-03-10 00:59:48.313084 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-10 00:59:48.313091 | orchestrator | Tuesday 10 March 2026 00:57:09 +0000 (0:00:00.917) 0:00:38.425 ********* 2026-03-10 00:59:48.313097 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313104 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.313111 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.313117 | orchestrator | 2026-03-10 00:59:48.313123 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-10 00:59:48.313129 | orchestrator | Tuesday 10 March 2026 00:57:09 +0000 (0:00:00.314) 0:00:38.740 ********* 2026-03-10 00:59:48.313135 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313141 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.313147 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.313153 | orchestrator | 2026-03-10 00:59:48.313160 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-10 00:59:48.313166 | orchestrator | Tuesday 10 March 2026 00:57:09 +0000 (0:00:00.345) 0:00:39.085 ********* 2026-03-10 00:59:48.313173 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-10 00:59:48.313180 | orchestrator | ...ignoring 2026-03-10 00:59:48.313187 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-10 00:59:48.313193 | orchestrator | ...ignoring 2026-03-10 00:59:48.313200 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-10 00:59:48.313206 | orchestrator | ...ignoring 2026-03-10 00:59:48.313212 | orchestrator | 2026-03-10 00:59:48.313218 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-10 00:59:48.313224 | orchestrator | Tuesday 10 March 2026 00:57:20 +0000 (0:00:10.890) 0:00:49.976 ********* 2026-03-10 00:59:48.313236 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313242 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.313248 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.313254 | orchestrator | 2026-03-10 00:59:48.313260 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-10 00:59:48.313267 | orchestrator | Tuesday 10 March 2026 00:57:21 +0000 (0:00:00.489) 0:00:50.466 ********* 2026-03-10 00:59:48.313273 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.313279 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.313285 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313291 | orchestrator | 2026-03-10 00:59:48.313297 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-10 00:59:48.313304 | orchestrator | Tuesday 10 March 2026 00:57:21 +0000 (0:00:00.695) 0:00:51.161 ********* 2026-03-10 00:59:48.313310 | orchestrator | skipping: 
[testbed-node-0] 2026-03-10 00:59:48.313316 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.313322 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313329 | orchestrator | 2026-03-10 00:59:48.313335 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-10 00:59:48.313341 | orchestrator | Tuesday 10 March 2026 00:57:22 +0000 (0:00:00.505) 0:00:51.667 ********* 2026-03-10 00:59:48.313347 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.313353 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.313359 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313366 | orchestrator | 2026-03-10 00:59:48.313372 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-10 00:59:48.313385 | orchestrator | Tuesday 10 March 2026 00:57:22 +0000 (0:00:00.536) 0:00:52.204 ********* 2026-03-10 00:59:48.313392 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313399 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.313405 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.313411 | orchestrator | 2026-03-10 00:59:48.313417 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-10 00:59:48.313424 | orchestrator | Tuesday 10 March 2026 00:57:23 +0000 (0:00:00.498) 0:00:52.702 ********* 2026-03-10 00:59:48.313430 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.313437 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.313443 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313449 | orchestrator | 2026-03-10 00:59:48.313481 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-10 00:59:48.313492 | orchestrator | Tuesday 10 March 2026 00:57:24 +0000 (0:00:00.707) 0:00:53.410 ********* 2026-03-10 00:59:48.313503 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 00:59:48.313514 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313524 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-10 00:59:48.313534 | orchestrator | 2026-03-10 00:59:48.313541 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-10 00:59:48.313548 | orchestrator | Tuesday 10 March 2026 00:57:24 +0000 (0:00:00.447) 0:00:53.857 ********* 2026-03-10 00:59:48.313555 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.313562 | orchestrator | 2026-03-10 00:59:48.313569 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-10 00:59:48.313576 | orchestrator | Tuesday 10 March 2026 00:57:36 +0000 (0:00:11.684) 0:01:05.541 ********* 2026-03-10 00:59:48.313583 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313590 | orchestrator | 2026-03-10 00:59:48.313596 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-10 00:59:48.313603 | orchestrator | Tuesday 10 March 2026 00:57:36 +0000 (0:00:00.128) 0:01:05.669 ********* 2026-03-10 00:59:48.313611 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.313618 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.313625 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313631 | orchestrator | 2026-03-10 00:59:48.313639 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-10 00:59:48.313654 | orchestrator | Tuesday 10 March 2026 00:57:37 +0000 (0:00:01.059) 0:01:06.729 ********* 2026-03-10 00:59:48.313661 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.313668 | orchestrator | 2026-03-10 00:59:48.313675 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-10 00:59:48.313682 | orchestrator | Tuesday 10 
March 2026 00:57:45 +0000 (0:00:08.617) 0:01:15.346 ********* 2026-03-10 00:59:48.313689 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313696 | orchestrator | 2026-03-10 00:59:48.313703 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-10 00:59:48.313710 | orchestrator | Tuesday 10 March 2026 00:57:48 +0000 (0:00:02.577) 0:01:17.924 ********* 2026-03-10 00:59:48.313717 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.313724 | orchestrator | 2026-03-10 00:59:48.313731 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-10 00:59:48.313738 | orchestrator | Tuesday 10 March 2026 00:57:51 +0000 (0:00:02.934) 0:01:20.858 ********* 2026-03-10 00:59:48.313745 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.313752 | orchestrator | 2026-03-10 00:59:48.313759 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-10 00:59:48.313766 | orchestrator | Tuesday 10 March 2026 00:57:51 +0000 (0:00:00.144) 0:01:21.003 ********* 2026-03-10 00:59:48.313773 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.313780 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.313787 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.313794 | orchestrator | 2026-03-10 00:59:48.313801 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-10 00:59:48.313808 | orchestrator | Tuesday 10 March 2026 00:57:51 +0000 (0:00:00.382) 0:01:21.386 ********* 2026-03-10 00:59:48.313815 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.313823 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-10 00:59:48.313829 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:48.313836 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:48.313842 | orchestrator | 
2026-03-10 00:59:48.313848 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-10 00:59:48.313854 | orchestrator | skipping: no hosts matched 2026-03-10 00:59:48.313860 | orchestrator | 2026-03-10 00:59:48.313866 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-10 00:59:48.313873 | orchestrator | 2026-03-10 00:59:48.313879 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-10 00:59:48.313885 | orchestrator | Tuesday 10 March 2026 00:57:52 +0000 (0:00:00.618) 0:01:22.004 ********* 2026-03-10 00:59:48.313891 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:48.313897 | orchestrator | 2026-03-10 00:59:48.313904 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-10 00:59:48.313910 | orchestrator | Tuesday 10 March 2026 00:58:11 +0000 (0:00:19.241) 0:01:41.246 ********* 2026-03-10 00:59:48.313916 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.313922 | orchestrator | 2026-03-10 00:59:48.313928 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-10 00:59:48.313935 | orchestrator | Tuesday 10 March 2026 00:58:27 +0000 (0:00:15.722) 0:01:56.968 ********* 2026-03-10 00:59:48.313941 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.313947 | orchestrator | 2026-03-10 00:59:48.313953 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-10 00:59:48.313959 | orchestrator | 2026-03-10 00:59:48.313965 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-10 00:59:48.313972 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:02.744) 0:01:59.713 ********* 2026-03-10 00:59:48.313978 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:48.313984 | orchestrator | 
2026-03-10 00:59:48.313990 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-10 00:59:48.314004 | orchestrator | Tuesday 10 March 2026 00:58:49 +0000 (0:00:19.211) 0:02:18.924 ********* 2026-03-10 00:59:48.314070 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.314079 | orchestrator | 2026-03-10 00:59:48.314085 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-10 00:59:48.314091 | orchestrator | Tuesday 10 March 2026 00:59:06 +0000 (0:00:16.597) 0:02:35.522 ********* 2026-03-10 00:59:48.314098 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.314104 | orchestrator | 2026-03-10 00:59:48.314110 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-10 00:59:48.314116 | orchestrator | 2026-03-10 00:59:48.314122 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-10 00:59:48.314128 | orchestrator | Tuesday 10 March 2026 00:59:08 +0000 (0:00:02.695) 0:02:38.218 ********* 2026-03-10 00:59:48.314135 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.314141 | orchestrator | 2026-03-10 00:59:48.314147 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-10 00:59:48.314153 | orchestrator | Tuesday 10 March 2026 00:59:27 +0000 (0:00:18.470) 0:02:56.688 ********* 2026-03-10 00:59:48.314159 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.314165 | orchestrator | 2026-03-10 00:59:48.314172 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-10 00:59:48.314178 | orchestrator | Tuesday 10 March 2026 00:59:27 +0000 (0:00:00.618) 0:02:57.307 ********* 2026-03-10 00:59:48.314184 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.314190 | orchestrator | 2026-03-10 00:59:48.314196 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2026-03-10 00:59:48.314202 | orchestrator | 2026-03-10 00:59:48.314208 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-10 00:59:48.314215 | orchestrator | Tuesday 10 March 2026 00:59:30 +0000 (0:00:03.025) 0:03:00.333 ********* 2026-03-10 00:59:48.314221 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:48.314227 | orchestrator | 2026-03-10 00:59:48.314233 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-10 00:59:48.314239 | orchestrator | Tuesday 10 March 2026 00:59:31 +0000 (0:00:00.592) 0:03:00.925 ********* 2026-03-10 00:59:48.314245 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.314252 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.314258 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.314264 | orchestrator | 2026-03-10 00:59:48.314270 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-10 00:59:48.314276 | orchestrator | Tuesday 10 March 2026 00:59:34 +0000 (0:00:02.596) 0:03:03.522 ********* 2026-03-10 00:59:48.314282 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.314288 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.314294 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.314300 | orchestrator | 2026-03-10 00:59:48.314307 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-10 00:59:48.314313 | orchestrator | Tuesday 10 March 2026 00:59:36 +0000 (0:00:02.538) 0:03:06.060 ********* 2026-03-10 00:59:48.314319 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.314325 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.314331 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.314337 | orchestrator | 
2026-03-10 00:59:48.314343 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-10 00:59:48.314349 | orchestrator | Tuesday 10 March 2026 00:59:39 +0000 (0:00:02.566) 0:03:08.626 ********* 2026-03-10 00:59:48.314355 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.314362 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.314368 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:48.314374 | orchestrator | 2026-03-10 00:59:48.314380 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-10 00:59:48.314386 | orchestrator | Tuesday 10 March 2026 00:59:41 +0000 (0:00:02.628) 0:03:11.254 ********* 2026-03-10 00:59:48.314432 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:48.314439 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:48.314445 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:48.314466 | orchestrator | 2026-03-10 00:59:48.314473 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-10 00:59:48.314479 | orchestrator | Tuesday 10 March 2026 00:59:45 +0000 (0:00:03.659) 0:03:14.914 ********* 2026-03-10 00:59:48.314485 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:48.314492 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:48.314498 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:48.314504 | orchestrator | 2026-03-10 00:59:48.314510 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:59:48.314516 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-10 00:59:48.314523 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-10 00:59:48.314531 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
2026-03-10 00:59:48.314537 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-10 00:59:48.314543 | orchestrator |
2026-03-10 00:59:48.314549 | orchestrator |
2026-03-10 00:59:48.314555 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:59:48.314562 | orchestrator | Tuesday 10 March 2026 00:59:45 +0000 (0:00:00.246) 0:03:15.161 *********
2026-03-10 00:59:48.314568 | orchestrator | ===============================================================================
2026-03-10 00:59:48.314574 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.45s
2026-03-10 00:59:48.314580 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.32s
2026-03-10 00:59:48.314596 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 18.47s
2026-03-10 00:59:48.314603 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.68s
2026-03-10 00:59:48.314609 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s
2026-03-10 00:59:48.314615 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.62s
2026-03-10 00:59:48.314631 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.44s
2026-03-10 00:59:48.314637 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.02s
2026-03-10 00:59:48.314643 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.54s
2026-03-10 00:59:48.314657 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.25s
2026-03-10 00:59:48.314663 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.05s
2026-03-10 00:59:48.314669 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.66s
2026-03-10 00:59:48.314675 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.51s
2026-03-10 00:59:48.314682 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.28s
2026-03-10 00:59:48.314688 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.13s
2026-03-10 00:59:48.314694 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.03s
2026-03-10 00:59:48.314700 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.93s
2026-03-10 00:59:48.314706 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s
2026-03-10 00:59:48.314712 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.63s
2026-03-10 00:59:48.314719 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.60s
2026-03-10 00:59:51.368799 | orchestrator | 2026-03-10 00:59:51 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:59:51.371117 | orchestrator | 2026-03-10 00:59:51 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 00:59:51.372751 | orchestrator | 2026-03-10 00:59:51 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 00:59:51.372811 | orchestrator | 2026-03-10 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:59:54.422562 | orchestrator | 2026-03-10 00:59:54 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:59:54.422807 | orchestrator | 2026-03-10 00:59:54 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 00:59:54.424375 | orchestrator | 2026-03-10 00:59:54 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 00:59:54.424775 | orchestrator | 2026-03-10 00:59:54 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:59:57.480128 | orchestrator | 2026-03-10 00:59:57 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 00:59:57.484210 | orchestrator | 2026-03-10 00:59:57 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 00:59:57.485088 | orchestrator | 2026-03-10 00:59:57 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 00:59:57.485130 | orchestrator | 2026-03-10 00:59:57 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:00.526292 | orchestrator | 2026-03-10 01:00:00 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:00.526538 | orchestrator | 2026-03-10 01:00:00 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:00.529824 | orchestrator | 2026-03-10 01:00:00 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:00.529927 | orchestrator | 2026-03-10 01:00:00 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:03.577318 | orchestrator | 2026-03-10 01:00:03 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:03.578961 | orchestrator | 2026-03-10 01:00:03 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:03.580178 | orchestrator | 2026-03-10 01:00:03 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:03.580225 | orchestrator | 2026-03-10 01:00:03 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:06.622965 | orchestrator | 2026-03-10 01:00:06 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:06.628179 | orchestrator | 2026-03-10 01:00:06 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:06.629716 | orchestrator | 2026-03-10 01:00:06 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:06.629787 | orchestrator | 2026-03-10 01:00:06 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:09.676877 | orchestrator | 2026-03-10 01:00:09 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:09.677689 | orchestrator | 2026-03-10 01:00:09 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:09.678600 | orchestrator | 2026-03-10 01:00:09 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:09.678773 | orchestrator | 2026-03-10 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:12.724617 | orchestrator | 2026-03-10 01:00:12 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:12.725230 | orchestrator | 2026-03-10 01:00:12 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:12.728834 | orchestrator | 2026-03-10 01:00:12 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:12.728904 | orchestrator | 2026-03-10 01:00:12 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:15.780793 | orchestrator | 2026-03-10 01:00:15 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:15.782523 | orchestrator | 2026-03-10 01:00:15 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:15.783321 | orchestrator | 2026-03-10 01:00:15 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:15.783409 | orchestrator | 2026-03-10 01:00:15 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:18.817474 | orchestrator | 2026-03-10 01:00:18 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:18.819153 | orchestrator | 2026-03-10 01:00:18 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:18.823427 | orchestrator | 2026-03-10 01:00:18 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:18.823494 | orchestrator | 2026-03-10 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:21.880809 | orchestrator | 2026-03-10 01:00:21 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:21.882756 | orchestrator | 2026-03-10 01:00:21 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:21.884315 | orchestrator | 2026-03-10 01:00:21 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:21.884707 | orchestrator | 2026-03-10 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:24.929380 | orchestrator | 2026-03-10 01:00:24 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:24.929660 | orchestrator | 2026-03-10 01:00:24 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:24.931806 | orchestrator | 2026-03-10 01:00:24 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:24.931886 | orchestrator | 2026-03-10 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:27.974760 | orchestrator | 2026-03-10 01:00:27 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:27.976284 | orchestrator | 2026-03-10 01:00:27 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:27.977216 | orchestrator | 2026-03-10 01:00:27 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:27.978079 | orchestrator | 2026-03-10 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:31.022324 | orchestrator | 2026-03-10 01:00:31 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:31.030119 | orchestrator | 2026-03-10 01:00:31 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:31.031600 | orchestrator | 2026-03-10 01:00:31 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:31.031636 | orchestrator | 2026-03-10 01:00:31 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:34.074918 | orchestrator | 2026-03-10 01:00:34 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:34.077583 | orchestrator | 2026-03-10 01:00:34 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:34.080032 | orchestrator | 2026-03-10 01:00:34 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:34.080500 | orchestrator | 2026-03-10 01:00:34 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:37.123887 | orchestrator | 2026-03-10 01:00:37 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:37.123993 | orchestrator | 2026-03-10 01:00:37 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:37.124010 | orchestrator | 2026-03-10 01:00:37 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:37.124022 | orchestrator | 2026-03-10 01:00:37 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:40.168551 | orchestrator | 2026-03-10 01:00:40 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state STARTED
2026-03-10 01:00:40.171601 | orchestrator | 2026-03-10 01:00:40 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:00:40.173656 | orchestrator | 2026-03-10 01:00:40 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:00:40.173747 | orchestrator | 2026-03-10 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:00:43.218949 | orchestrator | 2026-03-10 01:00:43 | INFO  | Task be0f7a25-9ac2-40eb-942d-c25a23f733e8 is in state SUCCESS
2026-03-10 01:00:43.221310 | orchestrator |
2026-03-10 01:00:43.221433 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-10 01:00:43.221484 | orchestrator | 2.16.14
2026-03-10 01:00:43.221492 | orchestrator |
2026-03-10 01:00:43.221499 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-10 01:00:43.221544 | orchestrator |
2026-03-10 01:00:43.221550 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-10 01:00:43.221557 | orchestrator | Tuesday 10 March 2026 00:58:26 +0000 (0:00:00.665) 0:00:00.665 *********
2026-03-10 01:00:43.221564 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:00:43.221571 | orchestrator |
2026-03-10 01:00:43.221584 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-10 01:00:43.221590 | orchestrator | Tuesday 10 March 2026 00:58:27 +0000 (0:00:00.714) 0:00:01.379 *********
2026-03-10 01:00:43.221596 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221602 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221608 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221634 | orchestrator |
2026-03-10 01:00:43.221641 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-10 01:00:43.221647 | orchestrator | Tuesday 10 March 2026 00:58:28 +0000 (0:00:00.633) 0:00:02.013 *********
2026-03-10 01:00:43.221653 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221659 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221665 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221671 | orchestrator |
2026-03-10 01:00:43.221676 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-10 01:00:43.221682 | orchestrator | Tuesday 10 March 2026 00:58:28 +0000 (0:00:00.317) 0:00:02.330 *********
2026-03-10 01:00:43.221688 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221694 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221700 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221706 | orchestrator |
2026-03-10 01:00:43.221712 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-10 01:00:43.221718 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.831) 0:00:03.162 *********
2026-03-10 01:00:43.221724 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221730 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221754 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221760 | orchestrator |
2026-03-10 01:00:43.221766 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-10 01:00:43.221772 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.348) 0:00:03.511 *********
2026-03-10 01:00:43.221778 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221784 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221789 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221795 | orchestrator |
2026-03-10 01:00:43.221801 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-10 01:00:43.221807 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.342) 0:00:03.853 *********
2026-03-10 01:00:43.221820 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221826 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221831 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221837 | orchestrator |
2026-03-10 01:00:43.221843 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-10 01:00:43.221849 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.347) 0:00:04.201 *********
2026-03-10 01:00:43.221855 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.221862 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.221870 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.221926 | orchestrator |
2026-03-10 01:00:43.221937 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-10 01:00:43.221947 | orchestrator | Tuesday 10 March 2026 00:58:31 +0000 (0:00:00.549) 0:00:04.750 *********
2026-03-10 01:00:43.221957 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.221967 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.221976 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.221998 | orchestrator |
2026-03-10 01:00:43.222118 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-10 01:00:43.222129 | orchestrator | Tuesday 10 March 2026 00:58:31 +0000 (0:00:00.318) 0:00:05.068 *********
2026-03-10 01:00:43.222139 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:00:43.222151 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:00:43.222161 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:00:43.222173 | orchestrator |
2026-03-10 01:00:43.222198 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-10 01:00:43.222210 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.695) 0:00:05.764 *********
2026-03-10 01:00:43.222220 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.222230 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.222240 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.222250 | orchestrator |
2026-03-10 01:00:43.222261 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-10 01:00:43.222271 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.492) 0:00:06.257 *********
2026-03-10 01:00:43.222282 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:00:43.222292 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:00:43.222304 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:00:43.222315 | orchestrator |
2026-03-10 01:00:43.222325 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-10 01:00:43.222336 | orchestrator | Tuesday 10 March 2026 00:58:34 +0000 (0:00:02.222) 0:00:08.479 *********
2026-03-10 01:00:43.222346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-10 01:00:43.222357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-10 01:00:43.222367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-10 01:00:43.222379 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.222389 | orchestrator |
2026-03-10 01:00:43.222424 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-10 01:00:43.222436 | orchestrator | Tuesday 10 March 2026 00:58:35 +0000 (0:00:00.687) 0:00:09.167 *********
2026-03-10 01:00:43.222465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222497 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.222507 | orchestrator |
2026-03-10 01:00:43.222516 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-10 01:00:43.222525 | orchestrator | Tuesday 10 March 2026 00:58:36 +0000 (0:00:00.890) 0:00:10.057 *********
2026-03-10 01:00:43.222537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222569 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.222579 | orchestrator |
2026-03-10 01:00:43.222589 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-10 01:00:43.222599 | orchestrator | Tuesday 10 March 2026 00:58:36 +0000 (0:00:00.403) 0:00:10.461 *********
2026-03-10 01:00:43.222616 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1a6e84675835', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-10 00:58:33.268492', 'end': '2026-03-10 00:58:33.309541', 'delta': '0:00:00.041049', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1a6e84675835'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222630 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0e884d4b7100', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-10 00:58:34.045581', 'end': '2026-03-10 00:58:34.083469', 'delta': '0:00:00.037888', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0e884d4b7100'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222657 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8ee96f742264', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-10 00:58:34.605451', 'end': '2026-03-10 00:58:34.649872', 'delta': '0:00:00.044421', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8ee96f742264'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-10 01:00:43.222668 | orchestrator |
2026-03-10 01:00:43.222678 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-10 01:00:43.222689 | orchestrator | Tuesday 10 March 2026 00:58:37 +0000 (0:00:00.251) 0:00:10.712 *********
2026-03-10 01:00:43.222699 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.222708 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:00:43.222718 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:00:43.222727 | orchestrator |
2026-03-10 01:00:43.222737 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-10 01:00:43.222746 | orchestrator | Tuesday 10 March 2026 00:58:37 +0000 (0:00:00.522) 0:00:11.235 *********
2026-03-10 01:00:43.222756 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-10 01:00:43.222766 | orchestrator |
2026-03-10 01:00:43.222775 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-10 01:00:43.222785 | orchestrator | Tuesday 10 March 2026 00:58:39 +0000 (0:00:02.092) 0:00:13.328 *********
2026-03-10 01:00:43.222795 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.222804 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.222814 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.222824 | orchestrator |
2026-03-10 01:00:43.222833 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-10 01:00:43.222843 | orchestrator | Tuesday 10 March 2026 00:58:39 +0000 (0:00:00.317) 0:00:13.645 *********
2026-03-10 01:00:43.222852 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.222862 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.222872 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.222881 | orchestrator |
2026-03-10 01:00:43.222891 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-10 01:00:43.222899 | orchestrator | Tuesday 10 March 2026 00:58:40 +0000 (0:00:00.495) 0:00:14.141 *********
2026-03-10 01:00:43.222907 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.222915 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.222925 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.222934 | orchestrator |
2026-03-10 01:00:43.222943 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-10 01:00:43.222951 | orchestrator | Tuesday 10 March 2026 00:58:40 +0000 (0:00:00.543) 0:00:14.685 *********
2026-03-10 01:00:43.222960 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:00:43.222969 | orchestrator |
2026-03-10 01:00:43.222978 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-10 01:00:43.222987 | orchestrator | Tuesday 10 March 2026 00:58:41 +0000 (0:00:00.131) 0:00:14.817 *********
2026-03-10 01:00:43.222997 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223006 | orchestrator |
2026-03-10 01:00:43.223021 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-10 01:00:43.223030 | orchestrator | Tuesday 10 March 2026 00:58:41 +0000 (0:00:00.246) 0:00:15.063 *********
2026-03-10 01:00:43.223039 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223049 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223059 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223068 | orchestrator |
2026-03-10 01:00:43.223078 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-10 01:00:43.223087 | orchestrator | Tuesday 10 March 2026 00:58:41 +0000 (0:00:00.297) 0:00:15.361 *********
2026-03-10 01:00:43.223096 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223104 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223113 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223122 | orchestrator |
2026-03-10 01:00:43.223135 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-10 01:00:43.223145 | orchestrator | Tuesday 10 March 2026 00:58:42 +0000 (0:00:00.343) 0:00:15.704 *********
2026-03-10 01:00:43.223153 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223162 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223171 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223181 | orchestrator |
2026-03-10 01:00:43.223190 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-10 01:00:43.223199 | orchestrator | Tuesday 10 March 2026 00:58:42 +0000 (0:00:00.619) 0:00:16.323 *********
2026-03-10 01:00:43.223208 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223217 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223226 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223236 | orchestrator |
2026-03-10 01:00:43.223246 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-10 01:00:43.223255 | orchestrator | Tuesday 10 March 2026 00:58:42 +0000 (0:00:00.351) 0:00:16.674 *********
2026-03-10 01:00:43.223265 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223275 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223285 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223295 | orchestrator |
2026-03-10 01:00:43.223304 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-10 01:00:43.223314 | orchestrator | Tuesday 10 March 2026 00:58:43 +0000 (0:00:00.325) 0:00:17.000 *********
2026-03-10 01:00:43.223323 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223333 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223342 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223367 | orchestrator |
2026-03-10 01:00:43.223378 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-10 01:00:43.223408 | orchestrator | Tuesday 10 March 2026 00:58:43 +0000 (0:00:00.333) 0:00:17.334 *********
2026-03-10 01:00:43.223414 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:00:43.223420 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:00:43.223426 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:00:43.223432 | orchestrator |
2026-03-10 01:00:43.223438 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-10 01:00:43.223495 | orchestrator | Tuesday 10 March 2026 00:58:44 +0000 (0:00:00.544) 0:00:17.878 *********
2026-03-10 01:00:43.223503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a', 'dm-uuid-LVM-WfmIIUFFJw2jaM2wZ94MbXTIU1Q3uideiEjkxN1GdAfLt9tXghZfQML4bXOjdvSs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55', 'dm-uuid-LVM-oOevMLZLCWnJUTHGrEuKA1BjH5ndFznrD7OJhL26FbW5qogkNfLj60PsbnIbd0ju'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d', 'dm-uuid-LVM-58MU5grZlunTSBffmwjK3vjz0g18XyyLY7eFQxOxvS4FOsGnTwrKX832BjExbi3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:00:43.223601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223614 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1', 'dm-uuid-LVM-GuL0AeHVbbPblhWrBdLlyHKriwiZzQrZ4wTuRxRp3e6akvf3J1KcLrsLm9c2Jl40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EcjQSV-zkQb-m0mE-rsXO-uEtO-mPLD-c47yw4', 'scsi-0QEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a', 'scsi-SQEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2KaSBt-SqHa-oUG8-yCxe-3388-hb3b-0vmN9g', 'scsi-0QEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d', 'scsi-SQEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1', 'scsi-SQEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-10 01:00:43.223697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223741 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFyQk7-vg9B-ByOU-mflV-hSyH-sHKs-jpRec5', 'scsi-0QEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14', 'scsi-SQEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223752 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.223758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fgGVlE-qFyq-X1v1-PNp2-Pgr0-sUfs-zMpfLG', 'scsi-0QEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0', 'scsi-SQEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b', 'scsi-SQEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223780 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.223786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384', 'dm-uuid-LVM-GM8tC80SUSXkY6Qfq6Ug21NaheiJUcGkkVa35BA8c8B9VfexNV4oAMnIiqhJM006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2', 'dm-uuid-LVM-4k4kWJfxNe70XsJuzSaKOwSI0cLsfXJ7e6TWSi3ulBkofIuygrkM5QKQOfYvIse0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:00:43.223863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FD9Jdg-KTpo-TrBO-XfEZ-qATc-twnM-Vnsrfh', 'scsi-0QEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77', 'scsi-SQEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ioGbpd-C8RH-QMZj-HqXN-GzQI-YX9i-rUnDFY', 'scsi-0QEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730', 'scsi-SQEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88', 'scsi-SQEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:00:43.223912 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.223918 | orchestrator | 2026-03-10 01:00:43.223924 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-10 01:00:43.223930 | orchestrator | Tuesday 10 March 2026 00:58:44 +0000 (0:00:00.685) 0:00:18.564 ********* 2026-03-10 01:00:43.223937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a', 'dm-uuid-LVM-WfmIIUFFJw2jaM2wZ94MbXTIU1Q3uideiEjkxN1GdAfLt9tXghZfQML4bXOjdvSs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55', 'dm-uuid-LVM-oOevMLZLCWnJUTHGrEuKA1BjH5ndFznrD7OJhL26FbW5qogkNfLj60PsbnIbd0ju'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.223994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224000 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d', 'dm-uuid-LVM-58MU5grZlunTSBffmwjK3vjz0g18XyyLY7eFQxOxvS4FOsGnTwrKX832BjExbi3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224014 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1', 'dm-uuid-LVM-GuL0AeHVbbPblhWrBdLlyHKriwiZzQrZ4wTuRxRp3e6akvf3J1KcLrsLm9c2Jl40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_25d57d94-2695-4aa5-876f-38a57276d3cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--120d91ae--c06d--5ca9--b450--85f2d491e96a-osd--block--120d91ae--c06d--5ca9--b450--85f2d491e96a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EcjQSV-zkQb-m0mE-rsXO-uEtO-mPLD-c47yw4', 'scsi-0QEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a', 'scsi-SQEMU_QEMU_HARDDISK_a252bbef-4467-4af4-a387-4994b1c9e49a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--07a8a029--b5c8--5530--8cc4--5b47064bbf55-osd--block--07a8a029--b5c8--5530--8cc4--5b47064bbf55'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2KaSBt-SqHa-oUG8-yCxe-3388-hb3b-0vmN9g', 'scsi-0QEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d', 'scsi-SQEMU_QEMU_HARDDISK_f86d111d-1a96-4282-a6fb-aea85f8e4c5d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1', 'scsi-SQEMU_QEMU_HARDDISK_0c217fde-a42a-4606-a0be-96745b6d50a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224096 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224106 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224129 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224147 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16', 'scsi-SQEMU_QEMU_HARDDISK_513ba897-4681-4617-82d1-e2531ece3de8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-10 01:00:43.224201 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.224211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d-osd--block--ba4e8e90--9c8a--5143--9418--e7ec5f1bd32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uFyQk7-vg9B-ByOU-mflV-hSyH-sHKs-jpRec5', 'scsi-0QEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14', 'scsi-SQEMU_QEMU_HARDDISK_1d3a34ea-f16d-4f10-8269-5937a58b6a14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e8bae358--0d63--5788--ab6b--8bf409d6bda1-osd--block--e8bae358--0d63--5788--ab6b--8bf409d6bda1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fgGVlE-qFyq-X1v1-PNp2-Pgr0-sUfs-zMpfLG', 'scsi-0QEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0', 'scsi-SQEMU_QEMU_HARDDISK_b7d8aa34-d63a-4976-a853-b9d2680122e0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224242 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b', 'scsi-SQEMU_QEMU_HARDDISK_497bc817-8b42-47c9-935c-36bd3332f08b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224267 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.224278 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384', 'dm-uuid-LVM-GM8tC80SUSXkY6Qfq6Ug21NaheiJUcGkkVa35BA8c8B9VfexNV4oAMnIiqhJM006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2', 'dm-uuid-LVM-4k4kWJfxNe70XsJuzSaKOwSI0cLsfXJ7e6TWSi3ulBkofIuygrkM5QKQOfYvIse0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224297 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224329 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224346 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224367 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224387 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224417 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9ca93df-9930-4281-85b9-8a08fee9dbfb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-10 01:00:43.224429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c0742eba--6300--5cfa--b498--a3704e14c384-osd--block--c0742eba--6300--5cfa--b498--a3704e14c384'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FD9Jdg-KTpo-TrBO-XfEZ-qATc-twnM-Vnsrfh', 'scsi-0QEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77', 'scsi-SQEMU_QEMU_HARDDISK_fbc5b701-e3a2-4a57-9c09-bea5a2018a77'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2-osd--block--45abfd4e--fefd--5ba8--aea8--e55d74ffeda2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ioGbpd-C8RH-QMZj-HqXN-GzQI-YX9i-rUnDFY', 'scsi-0QEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730', 'scsi-SQEMU_QEMU_HARDDISK_01fdf314-9dac-4cf9-86b2-8624031a3730'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88', 'scsi-SQEMU_QEMU_HARDDISK_1827d390-92d5-42dc-b1df-e99337d10b88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224625 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-02-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:00:43.224635 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.224641 | orchestrator | 2026-03-10 01:00:43.224647 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-10 01:00:43.224654 | orchestrator | Tuesday 10 March 2026 00:58:45 +0000 (0:00:00.742) 0:00:19.307 ********* 2026-03-10 01:00:43.224660 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:00:43.224666 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:00:43.224672 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:00:43.224678 | orchestrator | 2026-03-10 01:00:43.224684 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-10 01:00:43.224689 | orchestrator | Tuesday 10 March 2026 00:58:46 +0000 (0:00:00.678) 0:00:19.985 ********* 2026-03-10 01:00:43.224695 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:00:43.224701 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:00:43.224706 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:00:43.224712 | orchestrator | 2026-03-10 01:00:43.224718 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-10 01:00:43.224724 | orchestrator | Tuesday 10 March 2026 00:58:46 +0000 (0:00:00.551) 0:00:20.537 ********* 2026-03-10 01:00:43.224730 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:00:43.224735 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:00:43.224740 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:00:43.224746 | orchestrator | 2026-03-10 01:00:43.224751 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-10 01:00:43.224756 | orchestrator | Tuesday 10 March 2026 00:58:47 +0000 (0:00:00.647) 0:00:21.184 
********* 2026-03-10 01:00:43.224762 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.224767 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.224773 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.224778 | orchestrator | 2026-03-10 01:00:43.224787 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-10 01:00:43.224797 | orchestrator | Tuesday 10 March 2026 00:58:47 +0000 (0:00:00.327) 0:00:21.512 ********* 2026-03-10 01:00:43.224813 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.224822 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.224830 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.224839 | orchestrator | 2026-03-10 01:00:43.224849 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-10 01:00:43.224857 | orchestrator | Tuesday 10 March 2026 00:58:48 +0000 (0:00:00.493) 0:00:22.005 ********* 2026-03-10 01:00:43.224865 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.224875 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.224883 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.224892 | orchestrator | 2026-03-10 01:00:43.224901 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-10 01:00:43.224912 | orchestrator | Tuesday 10 March 2026 00:58:48 +0000 (0:00:00.582) 0:00:22.588 ********* 2026-03-10 01:00:43.224922 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-10 01:00:43.224932 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-10 01:00:43.224942 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-10 01:00:43.224951 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-10 01:00:43.224961 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-10 01:00:43.224972 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-10 01:00:43.224980 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-10 01:00:43.224985 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-10 01:00:43.224990 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-10 01:00:43.224996 | orchestrator | 2026-03-10 01:00:43.225001 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-10 01:00:43.225007 | orchestrator | Tuesday 10 March 2026 00:58:49 +0000 (0:00:00.935) 0:00:23.524 ********* 2026-03-10 01:00:43.225012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-10 01:00:43.225018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-10 01:00:43.225028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-10 01:00:43.225034 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225039 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-10 01:00:43.225044 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-10 01:00:43.225050 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-10 01:00:43.225055 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.225060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-10 01:00:43.225066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-10 01:00:43.225071 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-10 01:00:43.225077 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.225082 | orchestrator | 2026-03-10 01:00:43.225088 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-10 01:00:43.225093 | orchestrator | Tuesday 10 March 2026 00:58:50 +0000 (0:00:00.401) 0:00:23.925 ********* 2026-03-10 
01:00:43.225099 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:00:43.225105 | orchestrator | 2026-03-10 01:00:43.225110 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-10 01:00:43.225117 | orchestrator | Tuesday 10 March 2026 00:58:51 +0000 (0:00:00.779) 0:00:24.705 ********* 2026-03-10 01:00:43.225128 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225133 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.225139 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.225144 | orchestrator | 2026-03-10 01:00:43.225149 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-10 01:00:43.225155 | orchestrator | Tuesday 10 March 2026 00:58:51 +0000 (0:00:00.348) 0:00:25.053 ********* 2026-03-10 01:00:43.225166 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225172 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.225177 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.225182 | orchestrator | 2026-03-10 01:00:43.225188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-10 01:00:43.225193 | orchestrator | Tuesday 10 March 2026 00:58:51 +0000 (0:00:00.331) 0:00:25.385 ********* 2026-03-10 01:00:43.225198 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225204 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.225209 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:00:43.225214 | orchestrator | 2026-03-10 01:00:43.225220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-10 01:00:43.225225 | orchestrator | Tuesday 10 March 2026 00:58:52 +0000 (0:00:00.336) 0:00:25.721 ********* 2026-03-10 
01:00:43.225230 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:00:43.225236 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:00:43.225241 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:00:43.225246 | orchestrator | 2026-03-10 01:00:43.225252 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-10 01:00:43.225257 | orchestrator | Tuesday 10 March 2026 00:58:53 +0000 (0:00:01.044) 0:00:26.766 ********* 2026-03-10 01:00:43.225264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:00:43.225270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:00:43.225276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:00:43.225283 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225288 | orchestrator | 2026-03-10 01:00:43.225294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-10 01:00:43.225301 | orchestrator | Tuesday 10 March 2026 00:58:53 +0000 (0:00:00.382) 0:00:27.148 ********* 2026-03-10 01:00:43.225307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:00:43.225313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:00:43.225319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:00:43.225325 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225331 | orchestrator | 2026-03-10 01:00:43.225338 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-10 01:00:43.225344 | orchestrator | Tuesday 10 March 2026 00:58:53 +0000 (0:00:00.396) 0:00:27.545 ********* 2026-03-10 01:00:43.225351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:00:43.225356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:00:43.225361 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:00:43.225367 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225372 | orchestrator | 2026-03-10 01:00:43.225378 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-10 01:00:43.225383 | orchestrator | Tuesday 10 March 2026 00:58:54 +0000 (0:00:00.421) 0:00:27.967 ********* 2026-03-10 01:00:43.225389 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:00:43.225394 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:00:43.225399 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:00:43.225405 | orchestrator | 2026-03-10 01:00:43.225410 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-10 01:00:43.225415 | orchestrator | Tuesday 10 March 2026 00:58:54 +0000 (0:00:00.347) 0:00:28.315 ********* 2026-03-10 01:00:43.225421 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-10 01:00:43.225426 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-10 01:00:43.225431 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-10 01:00:43.225436 | orchestrator | 2026-03-10 01:00:43.225464 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-10 01:00:43.225474 | orchestrator | Tuesday 10 March 2026 00:58:55 +0000 (0:00:00.531) 0:00:28.847 ********* 2026-03-10 01:00:43.225492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 01:00:43.225505 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:00:43.225522 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 01:00:43.225531 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-10 01:00:43.225539 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-10 01:00:43.225547 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-10 01:00:43.225555 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-10 01:00:43.225563 | orchestrator | 2026-03-10 01:00:43.225572 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-10 01:00:43.225581 | orchestrator | Tuesday 10 March 2026 00:58:56 +0000 (0:00:01.106) 0:00:29.954 ********* 2026-03-10 01:00:43.225589 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 01:00:43.225598 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:00:43.225607 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 01:00:43.225616 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-10 01:00:43.225625 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-10 01:00:43.225635 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-10 01:00:43.225645 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-10 01:00:43.225651 | orchestrator | 2026-03-10 01:00:43.225656 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-10 01:00:43.225662 | orchestrator | Tuesday 10 March 2026 00:58:58 +0000 (0:00:02.264) 0:00:32.218 ********* 2026-03-10 01:00:43.225667 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:00:43.225673 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:00:43.225678 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-10 01:00:43.225684 | orchestrator | 2026-03-10 01:00:43.225689 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-10 01:00:43.225695 | orchestrator | Tuesday 10 March 2026 00:58:58 +0000 (0:00:00.387) 0:00:32.606 ********* 2026-03-10 01:00:43.225702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:00:43.225710 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:00:43.225715 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:00:43.225721 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:00:43.225727 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:00:43.225738 | orchestrator | 2026-03-10 01:00:43.225744 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-10 01:00:43.225749 | orchestrator | Tuesday 10 March 2026 00:59:46 +0000 (0:00:47.958) 0:01:20.564 ********* 2026-03-10 01:00:43.225754 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225765 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225770 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225776 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225787 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-10 01:00:43.225792 | orchestrator | 2026-03-10 01:00:43.225797 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-10 01:00:43.225803 | orchestrator | Tuesday 10 March 2026 01:00:11 +0000 (0:00:24.545) 0:01:45.110 ********* 2026-03-10 01:00:43.225808 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225817 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225822 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225828 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225833 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225839 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225844 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:00:43.225849 | orchestrator | 2026-03-10 01:00:43.225855 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-10 01:00:43.225860 | orchestrator | Tuesday 10 March 2026 01:00:24 +0000 (0:00:12.737) 0:01:57.848 ********* 2026-03-10 01:00:43.225865 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225871 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:00:43.225876 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:00:43.225881 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225887 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:00:43.225895 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:00:43.225901 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225906 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:00:43.225912 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:00:43.225917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225923 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:00:43.225928 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:00:43.225934 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225939 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-10 01:00:43.225944 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:00:43.225955 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:00:43.225961 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:00:43.225966 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:00:43.225972 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-10 01:00:43.225977 | orchestrator | 2026-03-10 01:00:43.225983 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:00:43.225988 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-10 01:00:43.225995 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-10 01:00:43.226001 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-10 01:00:43.226006 | orchestrator | 2026-03-10 01:00:43.226012 | orchestrator | 2026-03-10 01:00:43.226043 | orchestrator | 2026-03-10 01:00:43.226049 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:00:43.226054 | orchestrator | Tuesday 10 March 2026 01:00:42 +0000 (0:00:18.153) 0:02:16.002 ********* 2026-03-10 01:00:43.226060 | orchestrator | =============================================================================== 2026-03-10 01:00:43.226065 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.96s 2026-03-10 01:00:43.226070 | orchestrator | generate keys ---------------------------------------------------------- 24.55s 2026-03-10 01:00:43.226076 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.15s 
2026-03-10 01:00:43.226081 | orchestrator | get keys from monitors ------------------------------------------------- 12.74s 2026-03-10 01:00:43.226087 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.26s 2026-03-10 01:00:43.226092 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s 2026-03-10 01:00:43.226097 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.09s 2026-03-10 01:00:43.226103 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.11s 2026-03-10 01:00:43.226108 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 1.04s 2026-03-10 01:00:43.226113 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s 2026-03-10 01:00:43.226119 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.89s 2026-03-10 01:00:43.226124 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s 2026-03-10 01:00:43.226130 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.78s 2026-03-10 01:00:43.226138 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.74s 2026-03-10 01:00:43.226143 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s 2026-03-10 01:00:43.226149 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s 2026-03-10 01:00:43.226155 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-03-10 01:00:43.226160 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.69s 2026-03-10 01:00:43.226165 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s 2026-03-10 
01:00:43.226171 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2026-03-10 01:00:43.226176 | orchestrator | 2026-03-10 01:00:43 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:00:43.226182 | orchestrator | 2026-03-10 01:00:43 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:00:43.226188 | orchestrator | 2026-03-10 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:00:46.276722 | orchestrator | 2026-03-10 01:00:46 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:00:46.277838 | orchestrator | 2026-03-10 01:00:46 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:00:46.277876 | orchestrator | 2026-03-10 01:00:46 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:00:46.277890 | orchestrator | 2026-03-10 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:00:49.320952 | orchestrator | 2026-03-10 01:00:49 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:00:49.324383 | orchestrator | 2026-03-10 01:00:49 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:00:49.326552 | orchestrator | 2026-03-10 01:00:49 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:00:49.326634 | orchestrator | 2026-03-10 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:00:52.368025 | orchestrator | 2026-03-10 01:00:52 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:00:52.370856 | orchestrator | 2026-03-10 01:00:52 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:00:52.373631 | orchestrator | 2026-03-10 01:00:52 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:00:52.373698 | orchestrator | 2026-03-10 
01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:00:55.415346 | orchestrator | 2026-03-10 01:00:55 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:00:55.415998 | orchestrator | 2026-03-10 01:00:55 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:00:55.417490 | orchestrator | 2026-03-10 01:00:55 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:00:55.417566 | orchestrator | 2026-03-10 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:00:58.467827 | orchestrator | 2026-03-10 01:00:58 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:00:58.470232 | orchestrator | 2026-03-10 01:00:58 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:00:58.472672 | orchestrator | 2026-03-10 01:00:58 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:00:58.472725 | orchestrator | 2026-03-10 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:01.521745 | orchestrator | 2026-03-10 01:01:01 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:01:01.524251 | orchestrator | 2026-03-10 01:01:01 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:01:01.526192 | orchestrator | 2026-03-10 01:01:01 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:01:01.526267 | orchestrator | 2026-03-10 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:04.574673 | orchestrator | 2026-03-10 01:01:04 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:01:04.576927 | orchestrator | 2026-03-10 01:01:04 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:01:04.578472 | orchestrator | 2026-03-10 01:01:04 | INFO  | Task 
0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:01:04.578518 | orchestrator | 2026-03-10 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:07.639545 | orchestrator | 2026-03-10 01:01:07 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:01:07.640976 | orchestrator | 2026-03-10 01:01:07 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:01:07.644408 | orchestrator | 2026-03-10 01:01:07 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:01:07.644755 | orchestrator | 2026-03-10 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:10.684983 | orchestrator | 2026-03-10 01:01:10 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:01:10.686414 | orchestrator | 2026-03-10 01:01:10 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:01:10.689572 | orchestrator | 2026-03-10 01:01:10 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:01:10.689627 | orchestrator | 2026-03-10 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:13.739368 | orchestrator | 2026-03-10 01:01:13 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:01:13.740113 | orchestrator | 2026-03-10 01:01:13 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED 2026-03-10 01:01:13.741387 | orchestrator | 2026-03-10 01:01:13 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:01:13.741427 | orchestrator | 2026-03-10 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:16.785899 | orchestrator | 2026-03-10 01:01:16 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED 2026-03-10 01:01:16.787084 | orchestrator | 2026-03-10 01:01:16 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state 
STARTED
2026-03-10 01:01:16.788106 | orchestrator | 2026-03-10 01:01:16 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:16.788140 | orchestrator | 2026-03-10 01:01:16 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:19.821350 | orchestrator | 2026-03-10 01:01:19 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED
2026-03-10 01:01:19.822704 | orchestrator | 2026-03-10 01:01:19 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:19.824605 | orchestrator | 2026-03-10 01:01:19 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:19.824640 | orchestrator | 2026-03-10 01:01:19 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:22.867621 | orchestrator | 2026-03-10 01:01:22 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state STARTED
2026-03-10 01:01:22.869049 | orchestrator | 2026-03-10 01:01:22 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:22.871678 | orchestrator | 2026-03-10 01:01:22 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:22.871726 | orchestrator | 2026-03-10 01:01:22 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:25.920351 | orchestrator | 2026-03-10 01:01:25 | INFO  | Task 682ac5b7-0c3a-456f-b646-92ecd8b862c7 is in state SUCCESS
2026-03-10 01:01:25.921210 | orchestrator | 2026-03-10 01:01:25 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:25.922934 | orchestrator | 2026-03-10 01:01:25 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:25.924676 | orchestrator | 2026-03-10 01:01:25 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:25.924735 | orchestrator | 2026-03-10 01:01:25 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:28.981911 | orchestrator | 2026-03-10 01:01:28 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:28.983619 | orchestrator | 2026-03-10 01:01:28 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:28.987359 | orchestrator | 2026-03-10 01:01:28 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:28.987624 | orchestrator | 2026-03-10 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:32.025209 | orchestrator | 2026-03-10 01:01:32 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:32.025536 | orchestrator | 2026-03-10 01:01:32 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:32.026990 | orchestrator | 2026-03-10 01:01:32 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:32.027272 | orchestrator | 2026-03-10 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:35.068583 | orchestrator | 2026-03-10 01:01:35 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:35.070173 | orchestrator | 2026-03-10 01:01:35 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:35.071944 | orchestrator | 2026-03-10 01:01:35 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:35.071983 | orchestrator | 2026-03-10 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:38.116976 | orchestrator | 2026-03-10 01:01:38 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:38.120645 | orchestrator | 2026-03-10 01:01:38 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:38.123667 | orchestrator | 2026-03-10 01:01:38 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:38.124090 | orchestrator | 2026-03-10 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:41.199949 | orchestrator | 2026-03-10 01:01:41 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:41.202686 | orchestrator | 2026-03-10 01:01:41 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:41.203110 | orchestrator | 2026-03-10 01:01:41 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:41.203324 | orchestrator | 2026-03-10 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:44.246426 | orchestrator | 2026-03-10 01:01:44 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:44.250540 | orchestrator | 2026-03-10 01:01:44 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:44.251966 | orchestrator | 2026-03-10 01:01:44 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:44.253737 | orchestrator | 2026-03-10 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:47.297270 | orchestrator | 2026-03-10 01:01:47 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state STARTED
2026-03-10 01:01:47.301767 | orchestrator | 2026-03-10 01:01:47 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED
2026-03-10 01:01:47.303885 | orchestrator | 2026-03-10 01:01:47 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED
2026-03-10 01:01:47.303936 | orchestrator | 2026-03-10 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:50.365358 | orchestrator | 2026-03-10 01:01:50 | INFO  | Task 67337d9a-68bc-47e7-b929-a649aca5b2cc is in state SUCCESS
2026-03-10 01:01:50.366680 | orchestrator |
2026-03-10 01:01:50.366877 | orchestrator |
2026-03-10 01:01:50.366901 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-10 01:01:50.366921 | orchestrator |
2026-03-10 01:01:50.366945 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-10 01:01:50.366971 | orchestrator | Tuesday 10 March 2026 01:00:47 +0000 (0:00:00.178) 0:00:00.178 *********
2026-03-10 01:01:50.366989 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-10 01:01:50.367009 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367026 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367044 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:01:50.367060 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-10 01:01:50.367095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-10 01:01:50.367112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:01:50.367131 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-10 01:01:50.367147 | orchestrator |
2026-03-10 01:01:50.367165 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-10 01:01:50.367184 | orchestrator | Tuesday 10 March 2026 01:00:53 +0000 (0:00:05.752) 0:00:05.931 *********
2026-03-10 01:01:50.367201 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-10 01:01:50.367219 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367257 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367278 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:01:50.367297 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367317 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-10 01:01:50.367335 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-10 01:01:50.367353 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:01:50.367366 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-10 01:01:50.367405 | orchestrator |
2026-03-10 01:01:50.367430 | orchestrator | TASK [Create share directory] **************************************************
2026-03-10 01:01:50.367477 | orchestrator | Tuesday 10 March 2026 01:00:57 +0000 (0:00:04.490) 0:00:10.422 *********
2026-03-10 01:01:50.367498 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-10 01:01:50.367511 | orchestrator |
2026-03-10 01:01:50.367524 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-10 01:01:50.367537 | orchestrator | Tuesday 10 March 2026 01:00:59 +0000 (0:00:01.080) 0:00:11.502 *********
2026-03-10 01:01:50.367550 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-10 01:01:50.367563 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367575 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367605 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:01:50.367616 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367627 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-10 01:01:50.367638 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-10 01:01:50.367648 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:01:50.367659 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-10 01:01:50.367669 | orchestrator |
2026-03-10 01:01:50.367680 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-10 01:01:50.367690 | orchestrator | Tuesday 10 March 2026 01:01:13 +0000 (0:00:14.513) 0:00:26.016 *********
2026-03-10 01:01:50.367701 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-10 01:01:50.367712 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-10 01:01:50.367724 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-10 01:01:50.367735 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-10 01:01:50.367763 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-10 01:01:50.367775 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-10 01:01:50.367785 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-10 01:01:50.367796 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-10 01:01:50.367806 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-10 01:01:50.367817 | orchestrator |
2026-03-10 01:01:50.367828 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-10 01:01:50.367838 | orchestrator | Tuesday 10 March 2026 01:01:16 +0000 (0:00:03.346) 0:00:29.363 *********
2026-03-10 01:01:50.367850 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-10 01:01:50.367861 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367871 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367882 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:01:50.367893 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-10 01:01:50.367903 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-10 01:01:50.367914 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-10 01:01:50.367925 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:01:50.367935 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-10 01:01:50.367946 | orchestrator |
2026-03-10 01:01:50.367956 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:01:50.367967 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:01:50.367979 | orchestrator |
2026-03-10 01:01:50.367990 | orchestrator |
2026-03-10 01:01:50.368008 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:01:50.368019 | orchestrator | Tuesday 10 March 2026 01:01:24 +0000 (0:00:07.426) 0:00:36.789 *********
2026-03-10 01:01:50.368030 | orchestrator | ===============================================================================
2026-03-10 01:01:50.368040 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.51s
2026-03-10 01:01:50.368058 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.43s
2026-03-10 01:01:50.368069 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.75s
2026-03-10 01:01:50.368080 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.49s
2026-03-10 01:01:50.368091 | orchestrator | Check if target directories exist --------------------------------------- 3.35s
2026-03-10 01:01:50.368102 | orchestrator | Create share directory -------------------------------------------------- 1.08s
2026-03-10 01:01:50.368112 | orchestrator |
2026-03-10 01:01:50.368123 | orchestrator |
2026-03-10 01:01:50.368134 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:01:50.368144 | orchestrator |
2026-03-10 01:01:50.368155 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:01:50.368166 | orchestrator | Tuesday 10 March 2026 00:59:51 +0000 (0:00:00.312) 0:00:00.312 *********
2026-03-10 01:01:50.368176 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.368187 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.368198 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.368209 | orchestrator |
2026-03-10 01:01:50.368219 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:01:50.368230 | orchestrator | Tuesday 10 March 2026 00:59:51 +0000 (0:00:00.340) 0:00:00.652 *********
2026-03-10 01:01:50.368240 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-10 01:01:50.368251 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-10 01:01:50.368262 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-10 01:01:50.368273 | orchestrator |
2026-03-10 01:01:50.368283 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-10 01:01:50.368294 | orchestrator |
2026-03-10 01:01:50.368305 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-10 01:01:50.368315 | orchestrator | Tuesday 10 March 2026 00:59:51 +0000 (0:00:00.464) 0:00:01.117 *********
2026-03-10 01:01:50.368326 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:50.368337 | orchestrator |
2026-03-10 01:01:50.368347 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-10 01:01:50.368358 | orchestrator | Tuesday 10 March 2026 00:59:52 +0000 (0:00:00.571) 0:00:01.689 *********
2026-03-10 01:01:50.368388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 01:01:50.368420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 01:01:50.368477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-10 01:01:50.368508 | orchestrator |
2026-03-10 01:01:50.368526 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-10 01:01:50.368543 | orchestrator | Tuesday 10 March 2026 00:59:53 +0000 (0:00:01.250) 0:00:02.939 *********
2026-03-10 01:01:50.368568 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.368589 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.368607 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.368625 | orchestrator |
2026-03-10 01:01:50.368642 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-10 01:01:50.368662 | orchestrator | Tuesday 10 March 2026 00:59:54 +0000 (0:00:00.544) 0:00:03.483 *********
2026-03-10 01:01:50.368680 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-10 01:01:50.368698 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-10 01:01:50.368717 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-10 01:01:50.368735 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-10 01:01:50.368753 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-10 01:01:50.368771 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-10 01:01:50.368784 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-10 01:01:50.368795 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-10 01:01:50.368806 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-10 01:01:50.368816 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-10 01:01:50.368827 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-10 01:01:50.368837 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-10 01:01:50.368848 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-10 01:01:50.368858 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-10 01:01:50.368869 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-10 01:01:50.368880 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-10 01:01:50.368891 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-10 01:01:50.368901 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-10 01:01:50.368912 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-10 01:01:50.368923 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-10 01:01:50.368934 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-10 01:01:50.368953 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-10 01:01:50.368975 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-10 01:01:50.368986 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-10 01:01:50.368999 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-10 01:01:50.369011 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-10 01:01:50.369022 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-10 01:01:50.369033 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-10 01:01:50.369044 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-10 01:01:50.369055 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-10 01:01:50.369065 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-10 01:01:50.369076 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-10 01:01:50.369093 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-10 01:01:50.369104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-10 01:01:50.369115 | orchestrator |
2026-03-10 01:01:50.369126 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.369137 | orchestrator | Tuesday 10 March 2026 00:59:55 +0000 (0:00:00.857) 0:00:04.340 *********
2026-03-10 01:01:50.369148 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.369159 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.369169 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.369180 | orchestrator |
2026-03-10 01:01:50.369191 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.369202 | orchestrator | Tuesday 10 March 2026 00:59:55 +0000 (0:00:00.369) 0:00:04.710 *********
2026-03-10 01:01:50.369212 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369223 | orchestrator |
2026-03-10 01:01:50.369234 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.369245 | orchestrator | Tuesday 10 March 2026 00:59:55 +0000 (0:00:00.179) 0:00:04.889 *********
2026-03-10 01:01:50.369257 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369274 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.369291 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.369302 | orchestrator |
2026-03-10 01:01:50.369313 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.369324 | orchestrator | Tuesday 10 March 2026 00:59:56 +0000 (0:00:00.624) 0:00:05.514 *********
2026-03-10 01:01:50.369334 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.369345 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.369356 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.369366 | orchestrator |
2026-03-10 01:01:50.369377 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.369388 | orchestrator | Tuesday 10 March 2026 00:59:56 +0000 (0:00:00.459) 0:00:05.974 *********
2026-03-10 01:01:50.369405 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369416 | orchestrator |
2026-03-10 01:01:50.369426 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.369437 | orchestrator | Tuesday 10 March 2026 00:59:56 +0000 (0:00:00.193) 0:00:06.168 *********
2026-03-10 01:01:50.369574 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369588 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.369599 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.369610 | orchestrator |
2026-03-10 01:01:50.369621 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.369631 | orchestrator | Tuesday 10 March 2026 00:59:57 +0000 (0:00:00.432) 0:00:06.600 *********
2026-03-10 01:01:50.369642 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.369653 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.369664 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.369674 | orchestrator |
2026-03-10 01:01:50.369685 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.369697 | orchestrator | Tuesday 10 March 2026 00:59:57 +0000 (0:00:00.363) 0:00:06.964 *********
2026-03-10 01:01:50.369716 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369734 | orchestrator |
2026-03-10 01:01:50.369752 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.369770 | orchestrator | Tuesday 10 March 2026 00:59:58 +0000 (0:00:00.459) 0:00:07.423 *********
2026-03-10 01:01:50.369789 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369806 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.369824 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.369843 | orchestrator |
2026-03-10 01:01:50.369871 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.369892 | orchestrator | Tuesday 10 March 2026 00:59:58 +0000 (0:00:00.352) 0:00:07.775 *********
2026-03-10 01:01:50.369911 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.369931 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.369943 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.369953 | orchestrator |
2026-03-10 01:01:50.369964 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.369975 | orchestrator | Tuesday 10 March 2026 00:59:58 +0000 (0:00:00.377) 0:00:08.153 *********
2026-03-10 01:01:50.369986 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.369996 | orchestrator |
2026-03-10 01:01:50.370007 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.370070 | orchestrator | Tuesday 10 March 2026 00:59:59 +0000 (0:00:00.142) 0:00:08.295 *********
2026-03-10 01:01:50.370081 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370091 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.370101 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.370111 | orchestrator |
2026-03-10 01:01:50.370120 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.370130 | orchestrator | Tuesday 10 March 2026 00:59:59 +0000 (0:00:00.351) 0:00:08.647 *********
2026-03-10 01:01:50.370140 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.370150 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.370159 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.370169 | orchestrator |
2026-03-10 01:01:50.370178 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.370188 | orchestrator | Tuesday 10 March 2026 00:59:59 +0000 (0:00:00.581) 0:00:09.229 *********
2026-03-10 01:01:50.370197 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370207 | orchestrator |
2026-03-10 01:01:50.370216 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.370226 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.128) 0:00:09.358 *********
2026-03-10 01:01:50.370236 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370246 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.370255 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.370274 | orchestrator |
2026-03-10 01:01:50.370284 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.370294 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.355) 0:00:09.714 *********
2026-03-10 01:01:50.370310 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.370320 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.370330 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.370339 | orchestrator |
2026-03-10 01:01:50.370349 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.370359 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.378) 0:00:10.092 *********
2026-03-10 01:01:50.370368 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370378 | orchestrator |
2026-03-10 01:01:50.370388 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.370398 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.130) 0:00:10.223 *********
2026-03-10 01:01:50.370408 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370417 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.370427 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.370437 | orchestrator |
2026-03-10 01:01:50.370487 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.370498 | orchestrator | Tuesday 10 March 2026 01:00:01 +0000 (0:00:00.327) 0:00:10.551 *********
2026-03-10 01:01:50.370508 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.370518 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.370527 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.370537 | orchestrator |
2026-03-10 01:01:50.370547 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.370556 | orchestrator | Tuesday 10 March 2026 01:00:01 +0000 (0:00:00.609) 0:00:11.160 *********
2026-03-10 01:01:50.370566 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370576 | orchestrator |
2026-03-10 01:01:50.370586 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.370595 | orchestrator | Tuesday 10 March 2026 01:00:02 +0000 (0:00:00.169) 0:00:11.330 *********
2026-03-10 01:01:50.370605 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370615 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.370625 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.370634 | orchestrator |
2026-03-10 01:01:50.370644 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.370654 | orchestrator | Tuesday 10 March 2026 01:00:02 +0000 (0:00:00.410) 0:00:11.740 *********
2026-03-10 01:01:50.370664 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.370673 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.370683 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.370693 | orchestrator |
2026-03-10 01:01:50.370702 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.370712 | orchestrator | Tuesday 10 March 2026 01:00:02 +0000 (0:00:00.337) 0:00:12.078 *********
2026-03-10 01:01:50.370722 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370731 | orchestrator |
2026-03-10 01:01:50.370741 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.370750 | orchestrator | Tuesday 10 March 2026 01:00:02 +0000 (0:00:00.133) 0:00:12.211 *********
2026-03-10 01:01:50.370760 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370770 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.370780 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.370789 | orchestrator |
2026-03-10 01:01:50.370799 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.370808 | orchestrator | Tuesday 10 March 2026 01:00:03 +0000 (0:00:00.474) 0:00:12.685 *********
2026-03-10 01:01:50.370818 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.370828 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:50.370838 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:50.370861 | orchestrator |
2026-03-10 01:01:50.370871 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-10 01:01:50.370881 | orchestrator | Tuesday 10 March 2026 01:00:03 +0000 (0:00:00.291) 0:00:12.977 *********
2026-03-10 01:01:50.370898 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370908 | orchestrator |
2026-03-10 01:01:50.370918 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-10 01:01:50.370927 | orchestrator | Tuesday 10 March 2026 01:00:03 +0000 (0:00:00.145) 0:00:13.122 *********
2026-03-10 01:01:50.370937 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:50.370947 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:50.370957 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:50.370966 | orchestrator |
2026-03-10 01:01:50.370976 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-10 01:01:50.370985 | orchestrator | Tuesday 10 March 2026 01:00:04 +0000 (0:00:00.290) 0:00:13.413 *********
2026-03-10 01:01:50.370995 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:50.371005 | orchestrator | ok:
[testbed-node-1] 2026-03-10 01:01:50.371015 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:50.371025 | orchestrator | 2026-03-10 01:01:50.371034 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:01:50.371044 | orchestrator | Tuesday 10 March 2026 01:00:04 +0000 (0:00:00.325) 0:00:13.739 ********* 2026-03-10 01:01:50.371053 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.371063 | orchestrator | 2026-03-10 01:01:50.371073 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:01:50.371087 | orchestrator | Tuesday 10 March 2026 01:00:04 +0000 (0:00:00.117) 0:00:13.857 ********* 2026-03-10 01:01:50.371105 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.371122 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:50.371140 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:50.371157 | orchestrator | 2026-03-10 01:01:50.371172 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-10 01:01:50.371188 | orchestrator | Tuesday 10 March 2026 01:00:05 +0000 (0:00:00.423) 0:00:14.280 ********* 2026-03-10 01:01:50.371206 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:50.371224 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:50.371242 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:50.371260 | orchestrator | 2026-03-10 01:01:50.371277 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-10 01:01:50.371287 | orchestrator | Tuesday 10 March 2026 01:00:06 +0000 (0:00:01.951) 0:00:16.232 ********* 2026-03-10 01:01:50.371297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-10 01:01:50.371314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-10 
01:01:50.371324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-10 01:01:50.371333 | orchestrator | 2026-03-10 01:01:50.371343 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-10 01:01:50.371352 | orchestrator | Tuesday 10 March 2026 01:00:09 +0000 (0:00:02.076) 0:00:18.309 ********* 2026-03-10 01:01:50.371362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-10 01:01:50.371373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-10 01:01:50.371382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-10 01:01:50.371392 | orchestrator | 2026-03-10 01:01:50.371402 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-10 01:01:50.371411 | orchestrator | Tuesday 10 March 2026 01:00:11 +0000 (0:00:02.688) 0:00:20.998 ********* 2026-03-10 01:01:50.371421 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-10 01:01:50.371431 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-10 01:01:50.371478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-10 01:01:50.371490 | orchestrator | 2026-03-10 01:01:50.371499 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-10 01:01:50.371509 | orchestrator | Tuesday 10 March 2026 01:00:13 +0000 (0:00:02.144) 0:00:23.143 ********* 2026-03-10 01:01:50.371519 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.371528 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:50.371538 | 
orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:50.371547 | orchestrator | 2026-03-10 01:01:50.371556 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-10 01:01:50.371566 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.311) 0:00:23.454 ********* 2026-03-10 01:01:50.371575 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.371585 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:50.371594 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:50.371604 | orchestrator | 2026-03-10 01:01:50.371613 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:01:50.371623 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.428) 0:00:23.883 ********* 2026-03-10 01:01:50.371632 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:50.371642 | orchestrator | 2026-03-10 01:01:50.371651 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-10 01:01:50.371661 | orchestrator | Tuesday 10 March 2026 01:00:15 +0000 (0:00:00.878) 0:00:24.761 ********* 2026-03-10 01:01:50.371692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:01:50.371706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:01:50.371746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:01:50.371766 | orchestrator | 2026-03-10 01:01:50.371776 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-10 01:01:50.371786 | orchestrator | Tuesday 10 March 2026 01:00:17 +0000 (0:00:01.791) 0:00:26.552 ********* 2026-03-10 01:01:50.371804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:01:50.371816 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.371833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:01:50.371850 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:50.371869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:01:50.371880 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:50.371889 | orchestrator | 2026-03-10 01:01:50.371899 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-10 01:01:50.371909 | orchestrator | Tuesday 10 March 2026 01:00:18 +0000 (0:00:00.915) 0:00:27.468 ********* 2026-03-10 01:01:50.371925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:01:50.371942 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.371960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:01:50.371972 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:50.371989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:01:50.372007 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:50.372016 | orchestrator | 2026-03-10 01:01:50.372026 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-10 01:01:50.372036 | orchestrator | Tuesday 10 March 2026 01:00:19 +0000 (0:00:01.126) 0:00:28.594 ********* 2026-03-10 01:01:50.372054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:01:50.372077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:01:50.372106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:01:50.372124 | orchestrator | 2026-03-10 01:01:50.372134 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:01:50.372144 | orchestrator | Tuesday 10 March 2026 01:00:21 +0000 (0:00:02.038) 0:00:30.633 ********* 2026-03-10 01:01:50.372154 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:50.372164 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:50.372174 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:50.372183 | orchestrator | 2026-03-10 01:01:50.372193 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:01:50.372203 | orchestrator | Tuesday 10 March 2026 01:00:21 +0000 (0:00:00.302) 0:00:30.935 ********* 2026-03-10 01:01:50.372212 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:50.372222 | orchestrator | 2026-03-10 01:01:50.372236 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-10 01:01:50.372253 | orchestrator | Tuesday 10 March 2026 01:00:22 +0000 (0:00:00.601) 0:00:31.536 ********* 2026-03-10 01:01:50.372270 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:50.372287 | orchestrator | 2026-03-10 01:01:50.372302 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-10 01:01:50.372318 | orchestrator | Tuesday 10 March 2026 01:00:25 +0000 (0:00:02.820) 0:00:34.357 ********* 2026-03-10 01:01:50.372333 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:50.372349 | orchestrator | 2026-03-10 01:01:50.372366 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-10 01:01:50.372383 | orchestrator | Tuesday 10 March 2026 01:00:28 +0000 (0:00:03.312) 0:00:37.670 ********* 2026-03-10 01:01:50.372400 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:50.372416 | orchestrator | 2026-03-10 01:01:50.372431 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-10 01:01:50.372459 | orchestrator | Tuesday 10 March 2026 01:00:45 +0000 (0:00:17.257) 0:00:54.927 ********* 2026-03-10 01:01:50.372470 | orchestrator | 2026-03-10 01:01:50.372479 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-10 01:01:50.372489 | orchestrator | Tuesday 10 March 2026 01:00:45 +0000 (0:00:00.079) 0:00:55.007 ********* 2026-03-10 01:01:50.372499 | orchestrator | 2026-03-10 01:01:50.372508 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-10 01:01:50.372518 | orchestrator | Tuesday 10 March 2026 01:00:45 +0000 (0:00:00.084) 0:00:55.091 ********* 2026-03-10 01:01:50.372527 
| orchestrator | 2026-03-10 01:01:50.372537 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-10 01:01:50.372547 | orchestrator | Tuesday 10 March 2026 01:00:45 +0000 (0:00:00.086) 0:00:55.178 ********* 2026-03-10 01:01:50.372557 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:50.372566 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:50.372576 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:50.372586 | orchestrator | 2026-03-10 01:01:50.372596 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:01:50.372606 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-10 01:01:50.372624 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-10 01:01:50.372644 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-10 01:01:50.372653 | orchestrator | 2026-03-10 01:01:50.372663 | orchestrator | 2026-03-10 01:01:50.372673 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:01:50.372682 | orchestrator | Tuesday 10 March 2026 01:01:47 +0000 (0:01:01.399) 0:01:56.577 ********* 2026-03-10 01:01:50.372692 | orchestrator | =============================================================================== 2026-03-10 01:01:50.372702 | orchestrator | horizon : Restart horizon container ------------------------------------ 61.40s 2026-03-10 01:01:50.372711 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.26s 2026-03-10 01:01:50.372721 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.31s 2026-03-10 01:01:50.372731 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.82s 
2026-03-10 01:01:50.372740 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.69s 2026-03-10 01:01:50.372750 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.14s 2026-03-10 01:01:50.372759 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.08s 2026-03-10 01:01:50.372769 | orchestrator | horizon : Deploy horizon container -------------------------------------- 2.04s 2026-03-10 01:01:50.372778 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.95s 2026-03-10 01:01:50.372788 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.79s 2026-03-10 01:01:50.372798 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.25s 2026-03-10 01:01:50.372808 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.13s 2026-03-10 01:01:50.372817 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.92s 2026-03-10 01:01:50.372827 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.88s 2026-03-10 01:01:50.372843 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2026-03-10 01:01:50.372852 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.62s 2026-03-10 01:01:50.372862 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2026-03-10 01:01:50.372872 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-03-10 01:01:50.372882 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2026-03-10 01:01:50.372891 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 
2026-03-10 01:01:50.372901 | orchestrator | 2026-03-10 01:01:50 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state STARTED 2026-03-10 01:01:50.372911 | orchestrator | 2026-03-10 01:01:50 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:01:50.372921 | orchestrator | 2026-03-10 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:26.954169 | orchestrator | 2026-03-10 01:02:26 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:02:26.955086 | orchestrator | 2026-03-10 01:02:26 | INFO  | Task bf7f5ea8-f079-41f3-a337-52813178ba3b is in state STARTED 2026-03-10 01:02:26.959314 | orchestrator | 2026-03-10 01:02:26 | INFO  | Task 2fb9fa8f-a050-4ea4-bb84-59033ab57bd6 is in state SUCCESS 2026-03-10 01:02:26.960943 | orchestrator | 2026-03-10 01:02:26 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:02:26.962851 | orchestrator | 2026-03-10 01:02:26 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:02:26.963119 | orchestrator | 2026-03-10 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:30.017530 | orchestrator | 2026-03-10 01:02:30 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:02:30.022618 | orchestrator | 2026-03-10 01:02:30 | INFO  | Task bf7f5ea8-f079-41f3-a337-52813178ba3b is in state STARTED 2026-03-10 01:02:30.025905 | orchestrator | 2026-03-10 01:02:30 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:02:30.028097 | orchestrator | 2026-03-10 01:02:30 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:02:30.028370 | orchestrator 
| 2026-03-10 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:33.076369 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:02:33.076502 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task bf7f5ea8-f079-41f3-a337-52813178ba3b is in state SUCCESS 2026-03-10 01:02:33.077234 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:02:33.077645 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:02:33.078857 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:02:33.081208 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:02:33.081253 | orchestrator | 2026-03-10 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:36.117106 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:02:36.117219 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:02:36.117741 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:02:36.118206 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:02:36.118800 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:02:36.118829 | orchestrator | 2026-03-10 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:54.433881 | orchestrator | 2026-03-10 01:02:54 | INFO  | Task 
412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:02:54.435224 | orchestrator | 2026-03-10 01:02:54 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:02:54.437410 | orchestrator | 2026-03-10 01:02:54 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:02:54.438193 | orchestrator | 2026-03-10 01:02:54 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:02:54.438354 | orchestrator | 2026-03-10 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:57.501104 | orchestrator | 2026-03-10 01:02:57 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:02:57.503089 | orchestrator | 2026-03-10 01:02:57 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:02:57.504358 | orchestrator | 2026-03-10 01:02:57 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:02:57.505855 | orchestrator | 2026-03-10 01:02:57 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:02:57.506501 | orchestrator | 2026-03-10 01:02:57 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state STARTED 2026-03-10 01:02:57.506527 | orchestrator | 2026-03-10 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:00.567998 | orchestrator | 2026-03-10 01:03:00 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:00.568157 | orchestrator | 2026-03-10 01:03:00 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:00.568361 | orchestrator | 2026-03-10 01:03:00 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:03:00.569208 | orchestrator | 2026-03-10 01:03:00 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:00.571741 | orchestrator | 2026-03-10 01:03:00 | INFO  | Task 
19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:00.573490 | orchestrator | 2026-03-10 01:03:00 | INFO  | Task 0ae0af8f-1222-4df0-a5ce-897fac9c1526 is in state SUCCESS 2026-03-10 01:03:00.577629 | orchestrator | 2026-03-10 01:03:00.577706 | orchestrator | 2026-03-10 01:03:00.577722 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-10 01:03:00.577736 | orchestrator | 2026-03-10 01:03:00.577865 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-10 01:03:00.577887 | orchestrator | Tuesday 10 March 2026 01:01:29 +0000 (0:00:00.238) 0:00:00.238 ********* 2026-03-10 01:03:00.578176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-10 01:03:00.578222 | orchestrator | 2026-03-10 01:03:00.578244 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-10 01:03:00.578267 | orchestrator | Tuesday 10 March 2026 01:01:29 +0000 (0:00:00.211) 0:00:00.449 ********* 2026-03-10 01:03:00.578288 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-10 01:03:00.578308 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-10 01:03:00.578329 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-10 01:03:00.578347 | orchestrator | 2026-03-10 01:03:00.578364 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-10 01:03:00.578382 | orchestrator | Tuesday 10 March 2026 01:01:30 +0000 (0:00:01.201) 0:00:01.651 ********* 2026-03-10 01:03:00.578402 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-10 01:03:00.578453 | orchestrator | 2026-03-10 01:03:00.578473 | orchestrator | TASK 
[osism.services.cephclient : Copy keyring file] *************************** 2026-03-10 01:03:00.578490 | orchestrator | Tuesday 10 March 2026 01:01:31 +0000 (0:00:01.297) 0:00:02.948 ********* 2026-03-10 01:03:00.578501 | orchestrator | changed: [testbed-manager] 2026-03-10 01:03:00.578512 | orchestrator | 2026-03-10 01:03:00.578523 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-10 01:03:00.578534 | orchestrator | Tuesday 10 March 2026 01:01:32 +0000 (0:00:00.874) 0:00:03.823 ********* 2026-03-10 01:03:00.578571 | orchestrator | changed: [testbed-manager] 2026-03-10 01:03:00.578583 | orchestrator | 2026-03-10 01:03:00.578595 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-10 01:03:00.578607 | orchestrator | Tuesday 10 March 2026 01:01:33 +0000 (0:00:00.909) 0:00:04.733 ********* 2026-03-10 01:03:00.578618 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-10 01:03:00.578629 | orchestrator | ok: [testbed-manager] 2026-03-10 01:03:00.578640 | orchestrator | 2026-03-10 01:03:00.578651 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-10 01:03:00.578662 | orchestrator | Tuesday 10 March 2026 01:02:13 +0000 (0:00:40.036) 0:00:44.769 ********* 2026-03-10 01:03:00.578673 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-10 01:03:00.578685 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-10 01:03:00.578696 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-10 01:03:00.578709 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-10 01:03:00.578726 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-10 01:03:00.578743 | orchestrator | 2026-03-10 01:03:00.578763 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-10 01:03:00.578782 | orchestrator | Tuesday 10 March 2026 01:02:18 +0000 (0:00:04.433) 0:00:49.202 ********* 2026-03-10 01:03:00.578801 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-10 01:03:00.578813 | orchestrator | 2026-03-10 01:03:00.578824 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-10 01:03:00.578835 | orchestrator | Tuesday 10 March 2026 01:02:18 +0000 (0:00:00.501) 0:00:49.704 ********* 2026-03-10 01:03:00.578846 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:03:00.578857 | orchestrator | 2026-03-10 01:03:00.578867 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-10 01:03:00.578878 | orchestrator | Tuesday 10 March 2026 01:02:18 +0000 (0:00:00.133) 0:00:49.838 ********* 2026-03-10 01:03:00.578889 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:03:00.578901 | orchestrator | 2026-03-10 01:03:00.578911 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] *******
2026-03-10 01:03:00.578922 | orchestrator | Tuesday 10 March 2026 01:02:19 +0000 (0:00:00.557) 0:00:50.396 *********
2026-03-10 01:03:00.578933 | orchestrator | changed: [testbed-manager]
2026-03-10 01:03:00.578945 | orchestrator |
2026-03-10 01:03:00.578964 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-10 01:03:00.578982 | orchestrator | Tuesday 10 March 2026 01:02:20 +0000 (0:00:01.527) 0:00:51.923 *********
2026-03-10 01:03:00.578999 | orchestrator | changed: [testbed-manager]
2026-03-10 01:03:00.579017 | orchestrator |
2026-03-10 01:03:00.579035 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-10 01:03:00.579053 | orchestrator | Tuesday 10 March 2026 01:02:21 +0000 (0:00:00.799) 0:00:52.722 *********
2026-03-10 01:03:00.579073 | orchestrator | changed: [testbed-manager]
2026-03-10 01:03:00.579085 | orchestrator |
2026-03-10 01:03:00.579096 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-10 01:03:00.579123 | orchestrator | Tuesday 10 March 2026 01:02:22 +0000 (0:00:00.624) 0:00:53.347 *********
2026-03-10 01:03:00.579135 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-10 01:03:00.579146 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-10 01:03:00.579157 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-10 01:03:00.579168 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-10 01:03:00.579178 | orchestrator |
2026-03-10 01:03:00.579190 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:03:00.579204 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 01:03:00.579225 | orchestrator |
2026-03-10 01:03:00.579256 | orchestrator |
2026-03-10 01:03:00.579289 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:03:00.579302 | orchestrator | Tuesday 10 March 2026 01:02:23 +0000 (0:00:01.623) 0:00:54.970 *********
2026-03-10 01:03:00.579313 | orchestrator | ===============================================================================
2026-03-10 01:03:00.579324 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.04s
2026-03-10 01:03:00.579335 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.43s
2026-03-10 01:03:00.579346 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.62s
2026-03-10 01:03:00.579357 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.53s
2026-03-10 01:03:00.579373 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.30s
2026-03-10 01:03:00.579391 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.20s
2026-03-10 01:03:00.579410 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s
2026-03-10 01:03:00.579460 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.87s
2026-03-10 01:03:00.579479 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s
2026-03-10 01:03:00.579498 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s
2026-03-10 01:03:00.579518 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.56s
2026-03-10 01:03:00.579531 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s
2026-03-10 01:03:00.579542 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2026-03-10 01:03:00.579553 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-03-10 01:03:00.579564 | orchestrator |
2026-03-10 01:03:00.579575 | orchestrator |
2026-03-10 01:03:00.579586 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:03:00.579597 | orchestrator |
2026-03-10 01:03:00.579608 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:03:00.579618 | orchestrator | Tuesday 10 March 2026 01:02:29 +0000 (0:00:00.180) 0:00:00.180 *********
2026-03-10 01:03:00.579629 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:03:00.579640 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:03:00.579651 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:03:00.579662 | orchestrator |
2026-03-10 01:03:00.579672 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:03:00.579683 | orchestrator | Tuesday 10 March 2026 01:02:29 +0000 (0:00:00.330) 0:00:00.510 *********
2026-03-10 01:03:00.579694 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-10 01:03:00.579705 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-10 01:03:00.579716 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-10 01:03:00.579727 | orchestrator |
2026-03-10 01:03:00.579738 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-10 01:03:00.579749 | orchestrator |
2026-03-10 01:03:00.579762 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-10 01:03:00.579782 | orchestrator | Tuesday 10 March 2026 01:02:30 +0000 (0:00:00.844) 0:00:01.355 *********
2026-03-10 01:03:00.579801 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:03:00.579813 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:03:00.579823 | orchestrator | ok:
[testbed-node-1]
2026-03-10 01:03:00.579834 | orchestrator |
2026-03-10 01:03:00.579845 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:03:00.579856 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:03:00.579868 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:03:00.579880 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:03:00.579900 | orchestrator |
2026-03-10 01:03:00.579911 | orchestrator |
2026-03-10 01:03:00.579922 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:03:00.579933 | orchestrator | Tuesday 10 March 2026 01:02:31 +0000 (0:00:00.881) 0:00:02.236 *********
2026-03-10 01:03:00.579943 | orchestrator | ===============================================================================
2026-03-10 01:03:00.579954 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.88s
2026-03-10 01:03:00.579965 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2026-03-10 01:03:00.579976 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-10 01:03:00.579987 | orchestrator |
2026-03-10 01:03:00.579998 | orchestrator |
2026-03-10 01:03:00.580008 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:03:00.580019 | orchestrator |
2026-03-10 01:03:00.580030 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:03:00.580048 | orchestrator | Tuesday 10 March 2026 00:59:50 +0000 (0:00:00.292) 0:00:00.292 *********
2026-03-10 01:03:00.580059 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:03:00.580070 |
orchestrator | ok: [testbed-node-1] 2026-03-10 01:03:00.580081 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:03:00.580092 | orchestrator | 2026-03-10 01:03:00.580103 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:03:00.580113 | orchestrator | Tuesday 10 March 2026 00:59:51 +0000 (0:00:00.340) 0:00:00.632 ********* 2026-03-10 01:03:00.580124 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-10 01:03:00.580135 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-10 01:03:00.580147 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-10 01:03:00.580157 | orchestrator | 2026-03-10 01:03:00.580169 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-10 01:03:00.580179 | orchestrator | 2026-03-10 01:03:00.580200 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:03:00.580211 | orchestrator | Tuesday 10 March 2026 00:59:51 +0000 (0:00:00.520) 0:00:01.153 ********* 2026-03-10 01:03:00.580223 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:03:00.580234 | orchestrator | 2026-03-10 01:03:00.580245 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-10 01:03:00.580256 | orchestrator | Tuesday 10 March 2026 00:59:52 +0000 (0:00:00.603) 0:00:01.756 ********* 2026-03-10 01:03:00.580275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.580292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.580319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.580339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-10 01:03:00.580408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-10 01:03:00.580446 | orchestrator |
2026-03-10 01:03:00.580467 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-10 01:03:00.580488 | orchestrator | Tuesday 10 March 2026 00:59:54 +0000 (0:00:01.910) 0:00:03.667 *********
2026-03-10 01:03:00.580508 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.580528 | orchestrator |
2026-03-10 01:03:00.580546 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-10 01:03:00.580566 | orchestrator | Tuesday 10 March 2026 00:59:54 +0000 (0:00:00.142) 0:00:03.810 *********
2026-03-10 01:03:00.580580 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.580600 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:03:00.580620 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:03:00.580639 | orchestrator |
2026-03-10 01:03:00.580654 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-10 01:03:00.580665 | orchestrator | Tuesday 10 March 2026 00:59:54 +0000 (0:00:00.465) 0:00:04.276 *********
2026-03-10 01:03:00.580676 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-10
01:03:00.580687 | orchestrator | 2026-03-10 01:03:00.580700 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:03:00.580720 | orchestrator | Tuesday 10 March 2026 00:59:55 +0000 (0:00:01.074) 0:00:05.350 ********* 2026-03-10 01:03:00.580746 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:03:00.580764 | orchestrator | 2026-03-10 01:03:00.580780 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-10 01:03:00.580797 | orchestrator | Tuesday 10 March 2026 00:59:56 +0000 (0:00:00.640) 0:00:05.991 ********* 2026-03-10 01:03:00.580816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.580847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.580867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.580894 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580972 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.580992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.581011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.581030 | orchestrator | 2026-03-10 01:03:00.581049 | orchestrator | TASK 
[service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-10 01:03:00.581067 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:03.660) 0:00:09.651 ********* 2026-03-10 01:03:00.581107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.581130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.581173 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.581194 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.581213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.581225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.581243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.581255 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.581277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.581300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.581312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.581323 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.581334 | orchestrator | 2026-03-10 01:03:00.581345 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-10 01:03:00.581356 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.646) 0:00:10.298 ********* 2026-03-10 01:03:00.581368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.581386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.581921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.581961 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.581974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.581988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.582000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.582012 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.582086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.582122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.582155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.582177 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.582197 | orchestrator | 2026-03-10 01:03:00.582249 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-10 01:03:00.582271 | orchestrator | Tuesday 10 March 2026 01:00:01 +0000 (0:00:00.900) 0:00:11.199 ********* 2026-03-10 01:03:00.582293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.582331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.582364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.582388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582539 | orchestrator | 2026-03-10 01:03:00.582560 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-10 01:03:00.582579 | orchestrator | Tuesday 10 March 2026 01:00:04 +0000 (0:00:03.134) 0:00:14.333 ********* 2026-03-10 01:03:00.582614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.582639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.582659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.582680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.582719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.582746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.582758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.582792 | orchestrator | 2026-03-10 01:03:00.582803 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-10 01:03:00.582815 | orchestrator | Tuesday 10 March 2026 01:00:10 +0000 (0:00:05.925) 0:00:20.258 ********* 2026-03-10 01:03:00.582827 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:03:00.582838 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:03:00.582849 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:03:00.582860 | orchestrator | 2026-03-10 01:03:00.582871 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-10 01:03:00.582889 | orchestrator | Tuesday 10 March 2026 01:00:12 +0000 (0:00:01.575) 0:00:21.834 ********* 2026-03-10 01:03:00.582899 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.582911 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.582922 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.582933 | orchestrator | 2026-03-10 01:03:00.582944 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-10 01:03:00.582955 | orchestrator | Tuesday 10 March 2026 01:00:13 +0000 (0:00:00.635) 0:00:22.469 ********* 2026-03-10 01:03:00.582966 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.582978 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.582988 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.582999 | orchestrator | 2026-03-10 01:03:00.583015 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-10 01:03:00.583027 | orchestrator | Tuesday 10 March 2026 01:00:13 +0000 
(0:00:00.324) 0:00:22.794 ********* 2026-03-10 01:03:00.583038 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.583049 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.583060 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.583071 | orchestrator | 2026-03-10 01:03:00.583082 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-10 01:03:00.583093 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.605) 0:00:23.400 ********* 2026-03-10 01:03:00.583112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.583124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.583136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.583148 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.583160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.583188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.583207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.583220 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.583232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:03:00.583244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:03:00.583256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:03:00.583274 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.583286 | orchestrator | 
2026-03-10 01:03:00.583297 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:03:00.583308 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.723) 0:00:24.123 ********* 2026-03-10 01:03:00.583319 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.583330 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.583341 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.583352 | orchestrator | 2026-03-10 01:03:00.583363 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-10 01:03:00.583374 | orchestrator | Tuesday 10 March 2026 01:00:15 +0000 (0:00:00.396) 0:00:24.520 ********* 2026-03-10 01:03:00.583385 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-10 01:03:00.583396 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-10 01:03:00.583407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-10 01:03:00.583452 | orchestrator | 2026-03-10 01:03:00.583478 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-10 01:03:00.583490 | orchestrator | Tuesday 10 March 2026 01:00:16 +0000 (0:00:01.703) 0:00:26.224 ********* 2026-03-10 01:03:00.583501 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:03:00.583512 | orchestrator | 2026-03-10 01:03:00.583522 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-10 01:03:00.583533 | orchestrator | Tuesday 10 March 2026 01:00:18 +0000 (0:00:01.363) 0:00:27.587 ********* 2026-03-10 01:03:00.583544 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.583555 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.583566 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 01:03:00.583576 | orchestrator | 2026-03-10 01:03:00.583587 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-10 01:03:00.583598 | orchestrator | Tuesday 10 March 2026 01:00:19 +0000 (0:00:00.918) 0:00:28.506 ********* 2026-03-10 01:03:00.583615 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-10 01:03:00.583627 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:03:00.583638 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-10 01:03:00.583649 | orchestrator | 2026-03-10 01:03:00.583660 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-10 01:03:00.583671 | orchestrator | Tuesday 10 March 2026 01:00:20 +0000 (0:00:01.346) 0:00:29.853 ********* 2026-03-10 01:03:00.583682 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:03:00.583693 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:03:00.583704 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:03:00.583715 | orchestrator | 2026-03-10 01:03:00.583726 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-10 01:03:00.583736 | orchestrator | Tuesday 10 March 2026 01:00:20 +0000 (0:00:00.340) 0:00:30.193 ********* 2026-03-10 01:03:00.583747 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-10 01:03:00.583758 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-10 01:03:00.583769 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-10 01:03:00.583780 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-10 01:03:00.583798 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-10 01:03:00.583809 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-10 01:03:00.583821 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-10 01:03:00.583832 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-10 01:03:00.583843 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-10 01:03:00.583854 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-10 01:03:00.583864 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-10 01:03:00.583875 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-10 01:03:00.583886 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-10 01:03:00.583897 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-10 01:03:00.583908 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-10 01:03:00.583919 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:03:00.583930 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:03:00.583941 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:03:00.583952 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:03:00.583963 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 
01:03:00.583974 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:03:00.583984 | orchestrator | 2026-03-10 01:03:00.583995 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-10 01:03:00.584006 | orchestrator | Tuesday 10 March 2026 01:00:30 +0000 (0:00:09.606) 0:00:39.800 ********* 2026-03-10 01:03:00.584017 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:03:00.584027 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:03:00.584038 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:03:00.584049 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:03:00.584060 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:03:00.584071 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:03:00.584081 | orchestrator | 2026-03-10 01:03:00.584092 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-10 01:03:00.584108 | orchestrator | Tuesday 10 March 2026 01:00:33 +0000 (0:00:03.342) 0:00:43.142 ********* 2026-03-10 01:03:00.584129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.584149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.584163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:03:00.584175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.584192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 
01:03:00.584209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:03:00.584227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.584239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.584250 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:03:00.584262 | orchestrator | 2026-03-10 01:03:00.584273 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:03:00.584284 | orchestrator | Tuesday 10 March 2026 01:00:36 +0000 (0:00:02.427) 0:00:45.570 ********* 2026-03-10 01:03:00.584295 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.584306 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.584317 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.584329 | orchestrator | 2026-03-10 01:03:00.584340 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-10 01:03:00.584351 | orchestrator | Tuesday 10 March 2026 01:00:36 +0000 (0:00:00.344) 0:00:45.914 ********* 2026-03-10 01:03:00.584362 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:03:00.584373 | orchestrator | 2026-03-10 01:03:00.584384 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-10 01:03:00.584395 | orchestrator | Tuesday 10 March 2026 01:00:39 +0000 (0:00:02.450) 0:00:48.364 ********* 2026-03-10 01:03:00.584407 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:03:00.584476 | orchestrator | 2026-03-10 01:03:00.584491 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] ********** 2026-03-10 01:03:00.584502 | orchestrator | Tuesday 10 March 2026 01:00:41 +0000 (0:00:02.436) 0:00:50.801 ********* 2026-03-10 01:03:00.584513 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:03:00.584524 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:03:00.584535 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:03:00.584546 | orchestrator | 2026-03-10 01:03:00.584557 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-10 01:03:00.584592 | orchestrator | Tuesday 10 March 2026 01:00:42 +0000 (0:00:01.175) 0:00:51.976 ********* 2026-03-10 01:03:00.584603 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:03:00.584614 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:03:00.584625 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:03:00.584636 | orchestrator | 2026-03-10 01:03:00.584647 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-10 01:03:00.584665 | orchestrator | Tuesday 10 March 2026 01:00:42 +0000 (0:00:00.350) 0:00:52.327 ********* 2026-03-10 01:03:00.584677 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:03:00.584688 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:03:00.584699 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:03:00.584710 | orchestrator | 2026-03-10 01:03:00.584720 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-10 01:03:00.584732 | orchestrator | Tuesday 10 March 2026 01:00:43 +0000 (0:00:00.382) 0:00:52.709 ********* 2026-03-10 01:03:00.584743 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:03:00.584753 | orchestrator | 2026-03-10 01:03:00.584764 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-10 01:03:00.584775 | orchestrator | Tuesday 10 March 2026 01:00:59 +0000 (0:00:16.045) 0:01:08.754 ********* 2026-03-10 01:03:00.584786 | 
orchestrator | changed: [testbed-node-0]
2026-03-10 01:03:00.584797 | orchestrator |
2026-03-10 01:03:00.584816 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-10 01:03:00.584827 | orchestrator | Tuesday 10 March 2026 01:01:11 +0000 (0:00:11.856) 0:01:20.611 *********
2026-03-10 01:03:00.584838 | orchestrator |
2026-03-10 01:03:00.584849 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-10 01:03:00.584860 | orchestrator | Tuesday 10 March 2026 01:01:11 +0000 (0:00:00.065) 0:01:20.676 *********
2026-03-10 01:03:00.584871 | orchestrator |
2026-03-10 01:03:00.584882 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-10 01:03:00.584893 | orchestrator | Tuesday 10 March 2026 01:01:11 +0000 (0:00:00.065) 0:01:20.742 *********
2026-03-10 01:03:00.584904 | orchestrator |
2026-03-10 01:03:00.584915 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-10 01:03:00.584926 | orchestrator | Tuesday 10 March 2026 01:01:11 +0000 (0:00:00.066) 0:01:20.809 *********
2026-03-10 01:03:00.584937 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:03:00.584948 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:03:00.584959 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:03:00.584970 | orchestrator |
2026-03-10 01:03:00.584981 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-10 01:03:00.584992 | orchestrator | Tuesday 10 March 2026 01:01:38 +0000 (0:00:26.810) 0:01:47.620 *********
2026-03-10 01:03:00.585002 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:03:00.585012 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:03:00.585022 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:03:00.585032 | orchestrator |
2026-03-10 01:03:00.585041 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-10 01:03:00.585051 | orchestrator | Tuesday 10 March 2026 01:01:48 +0000 (0:00:10.439) 0:01:58.059 *********
2026-03-10 01:03:00.585060 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:03:00.585070 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:03:00.585080 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:03:00.585090 | orchestrator |
2026-03-10 01:03:00.585100 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-10 01:03:00.585109 | orchestrator | Tuesday 10 March 2026 01:02:01 +0000 (0:00:12.592) 0:02:10.652 *********
2026-03-10 01:03:00.585119 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:03:00.585129 | orchestrator |
2026-03-10 01:03:00.585139 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-10 01:03:00.585148 | orchestrator | Tuesday 10 March 2026 01:02:02 +0000 (0:00:00.952) 0:02:11.604 *********
2026-03-10 01:03:00.585165 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:03:00.585175 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:03:00.585184 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:03:00.585194 | orchestrator |
2026-03-10 01:03:00.585204 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-10 01:03:00.585214 | orchestrator | Tuesday 10 March 2026 01:02:03 +0000 (0:00:00.792) 0:02:12.397 *********
2026-03-10 01:03:00.585224 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:03:00.585233 | orchestrator |
2026-03-10 01:03:00.585243 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-10 01:03:00.585253 | orchestrator | Tuesday 10 March 2026 01:02:04 +0000 (0:00:01.900) 0:02:14.297 *********
2026-03-10 01:03:00.585262 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-10 01:03:00.585272 | orchestrator |
2026-03-10 01:03:00.585282 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-10 01:03:00.585292 | orchestrator | Tuesday 10 March 2026 01:02:17 +0000 (0:00:12.458) 0:02:26.755 *********
2026-03-10 01:03:00.585302 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-10 01:03:00.585311 | orchestrator |
2026-03-10 01:03:00.585321 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-10 01:03:00.585330 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:26.653) 0:02:53.408 *********
2026-03-10 01:03:00.585340 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-10 01:03:00.585350 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-10 01:03:00.585360 | orchestrator |
2026-03-10 01:03:00.585370 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-10 01:03:00.585380 | orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:07.015) 0:03:00.424 *********
2026-03-10 01:03:00.585390 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.585400 | orchestrator |
2026-03-10 01:03:00.585409 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-10 01:03:00.585483 | orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:00.148) 0:03:00.572 *********
2026-03-10 01:03:00.585502 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.585518 | orchestrator |
2026-03-10 01:03:00.585528 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-10 01:03:00.585543 | orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:00.261) 0:03:00.835 *********
2026-03-10 01:03:00.585559 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.585575 | orchestrator |
2026-03-10 01:03:00.585598 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-10 01:03:00.585615 | orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:00.175) 0:03:01.010 *********
2026-03-10 01:03:00.585629 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.585643 | orchestrator |
2026-03-10 01:03:00.585657 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-10 01:03:00.585674 | orchestrator | Tuesday 10 March 2026 01:02:52 +0000 (0:00:01.129) 0:03:02.139 *********
2026-03-10 01:03:00.585692 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:03:00.585709 | orchestrator |
2026-03-10 01:03:00.585726 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-10 01:03:00.585741 | orchestrator | Tuesday 10 March 2026 01:02:56 +0000 (0:00:03.534) 0:03:05.673 *********
2026-03-10 01:03:00.585751 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:03:00.585770 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:03:00.585780 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:03:00.585790 | orchestrator |
2026-03-10 01:03:00.585800 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:03:00.585810 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-10 01:03:00.585830 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:03:00.585840 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:03:00.585850 | orchestrator |
2026-03-10 01:03:00.585859 | orchestrator |
2026-03-10 01:03:00.585869 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:03:00.585879 | orchestrator | Tuesday 10 March 2026 01:02:56 +0000 (0:00:00.547) 0:03:06.220 *********
2026-03-10 01:03:00.585889 | orchestrator | ===============================================================================
2026-03-10 01:03:00.585898 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 26.81s
2026-03-10 01:03:00.585908 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.65s
2026-03-10 01:03:00.585917 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.05s
2026-03-10 01:03:00.585927 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.59s
2026-03-10 01:03:00.585936 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.46s
2026-03-10 01:03:00.585946 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.86s
2026-03-10 01:03:00.585956 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.44s
2026-03-10 01:03:00.585965 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.61s
2026-03-10 01:03:00.585975 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.02s
2026-03-10 01:03:00.585985 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.93s
2026-03-10 01:03:00.585994 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.66s
2026-03-10 01:03:00.586003 | orchestrator | keystone : Creating default user role ----------------------------------- 3.53s
2026-03-10 01:03:00.586011 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.34s
2026-03-10 01:03:00.586058 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.13s
2026-03-10 01:03:00.586078 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.45s
2026-03-10 01:03:00.586094 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.44s
2026-03-10 01:03:00.586106 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.43s
2026-03-10 01:03:00.586118 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s
2026-03-10 01:03:00.586130 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.90s
2026-03-10 01:03:00.586143 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.70s
2026-03-10 01:03:00.586154 | orchestrator | 2026-03-10 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:03:03.643846 | orchestrator | 2026-03-10 01:03:03 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED
2026-03-10 01:03:03.644642 | orchestrator | 2026-03-10 01:03:03 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:03:03.645642 | orchestrator | 2026-03-10 01:03:03 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED
2026-03-10 01:03:03.646629 | orchestrator | 2026-03-10 01:03:03 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:03:03.647634 | orchestrator | 2026-03-10 01:03:03 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:03:03.647667 | orchestrator | 2026-03-10 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:03:06.753968 | orchestrator | 2026-03-10 01:03:06 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED
2026-03-10 01:03:06.754143 | orchestrator | 2026-03-10 01:03:06 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10
01:03:06.754171 | orchestrator | 2026-03-10 01:03:06 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:03:06.754181 | orchestrator | 2026-03-10 01:03:06 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:06.754190 | orchestrator | 2026-03-10 01:03:06 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:06.754199 | orchestrator | 2026-03-10 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:09.909688 | orchestrator | 2026-03-10 01:03:09 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:09.910174 | orchestrator | 2026-03-10 01:03:09 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:09.912363 | orchestrator | 2026-03-10 01:03:09 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:03:09.913189 | orchestrator | 2026-03-10 01:03:09 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:09.914179 | orchestrator | 2026-03-10 01:03:09 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:09.914224 | orchestrator | 2026-03-10 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:12.961642 | orchestrator | 2026-03-10 01:03:12 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:12.962002 | orchestrator | 2026-03-10 01:03:12 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:12.963096 | orchestrator | 2026-03-10 01:03:12 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:03:12.963859 | orchestrator | 2026-03-10 01:03:12 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:12.964984 | orchestrator | 2026-03-10 01:03:12 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 
01:03:12.964997 | orchestrator | 2026-03-10 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:16.047308 | orchestrator | 2026-03-10 01:03:16 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:16.047819 | orchestrator | 2026-03-10 01:03:16 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:16.048625 | orchestrator | 2026-03-10 01:03:16 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:03:16.049086 | orchestrator | 2026-03-10 01:03:16 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:16.049891 | orchestrator | 2026-03-10 01:03:16 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:16.049912 | orchestrator | 2026-03-10 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:19.084712 | orchestrator | 2026-03-10 01:03:19 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:19.084795 | orchestrator | 2026-03-10 01:03:19 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:19.084937 | orchestrator | 2026-03-10 01:03:19 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state STARTED 2026-03-10 01:03:19.085767 | orchestrator | 2026-03-10 01:03:19 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:19.086594 | orchestrator | 2026-03-10 01:03:19 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:19.086642 | orchestrator | 2026-03-10 01:03:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:22.111895 | orchestrator | 2026-03-10 01:03:22 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:22.112176 | orchestrator | 2026-03-10 01:03:22 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:22.112966 | orchestrator 
| 2026-03-10 01:03:22 | INFO  | Task 412ccaa6-306d-4043-918f-ad77e192b33d is in state SUCCESS 2026-03-10 01:03:22.113565 | orchestrator | 2026-03-10 01:03:22 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:22.116564 | orchestrator | 2026-03-10 01:03:22 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:22.116602 | orchestrator | 2026-03-10 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:25.144149 | orchestrator | 2026-03-10 01:03:25 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:03:25.144662 | orchestrator | 2026-03-10 01:03:25 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:25.145050 | orchestrator | 2026-03-10 01:03:25 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:25.147959 | orchestrator | 2026-03-10 01:03:25 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:25.148555 | orchestrator | 2026-03-10 01:03:25 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:25.148589 | orchestrator | 2026-03-10 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:28.177497 | orchestrator | 2026-03-10 01:03:28 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:03:28.177772 | orchestrator | 2026-03-10 01:03:28 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:28.179208 | orchestrator | 2026-03-10 01:03:28 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:28.179274 | orchestrator | 2026-03-10 01:03:28 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:28.180234 | orchestrator | 2026-03-10 01:03:28 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:28.180268 | orchestrator | 
2026-03-10 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:31.239894 | orchestrator | 2026-03-10 01:03:31 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:03:31.239982 | orchestrator | 2026-03-10 01:03:31 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:31.239998 | orchestrator | 2026-03-10 01:03:31 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:31.240009 | orchestrator | 2026-03-10 01:03:31 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:31.240020 | orchestrator | 2026-03-10 01:03:31 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:31.240031 | orchestrator | 2026-03-10 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:34.250470 | orchestrator | 2026-03-10 01:03:34 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:03:34.250573 | orchestrator | 2026-03-10 01:03:34 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:34.253631 | orchestrator | 2026-03-10 01:03:34 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:03:34.254318 | orchestrator | 2026-03-10 01:03:34 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:03:34.255376 | orchestrator | 2026-03-10 01:03:34 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:03:34.255448 | orchestrator | 2026-03-10 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:37.300250 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:03:37.300667 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED 2026-03-10 01:03:37.301532 | orchestrator | 2026-03-10 01:03:37 | INFO  | 
Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:03:37.302763 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:03:37.303972 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:03:37.304007 | orchestrator | 2026-03-10 01:03:37 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:03:52.519694 | orchestrator | 2026-03-10 01:03:52 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:03:52.520228 | orchestrator | 2026-03-10 01:03:52 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state STARTED
2026-03-10 01:03:52.521207 | orchestrator | 2026-03-10 01:03:52 | INFO  | Task
709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:03:52.522476 | orchestrator | 2026-03-10 01:03:52 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:03:52.523332 | orchestrator | 2026-03-10 01:03:52 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:03:52.523494 | orchestrator | 2026-03-10 01:03:52 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:03:55.560136 | orchestrator | 2026-03-10 01:03:55 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:03:55.560367 | orchestrator | 2026-03-10 01:03:55 | INFO  | Task c102f140-edab-4aa4-a9b1-f37054e0223e is in state SUCCESS
2026-03-10 01:03:55.560810 | orchestrator |

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Tuesday 10 March 2026 01:02:38 +0000 (0:00:00.829) 0:00:00.829 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Tuesday 10 March 2026 01:02:39 +0000 (0:00:01.585) 0:00:02.414 *********
ok: [testbed-manager] => (item=enable_ceph_rgw_True)
ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
ok: [testbed-node-5] => (item=enable_ceph_rgw_True)

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-rgw : include_tasks] ************************************************
Tuesday 10 March 2026 01:02:40 +0000 (0:00:01.026) 0:00:03.440 *********
included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-ks-register : ceph-rgw | Creating services] **********************
Tuesday 10 March 2026 01:02:42 +0000 (0:00:01.826) 0:00:05.267 *********
changed: [testbed-manager] => (item=swift (object-store))

TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
Tuesday 10 March 2026 01:02:47 +0000 (0:00:04.395) 0:00:09.662 *********
changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)

TASK [service-ks-register : ceph-rgw | Creating projects] **********************
Tuesday 10 March 2026 01:02:55 +0000 (0:00:07.838) 0:00:17.500 *********
ok: [testbed-manager] => (item=service)

TASK [service-ks-register : ceph-rgw | Creating users] *************************
Tuesday 10 March 2026 01:02:59 +0000 (0:00:04.100) 0:00:21.601 *********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-manager] => (item=ceph_rgw -> service)

TASK [service-ks-register : ceph-rgw | Creating roles] *************************
Tuesday 10 March 2026 01:03:05 +0000 (0:00:06.700) 0:00:28.302 *********
ok: [testbed-manager] => (item=admin)
changed: [testbed-manager] => (item=ResellerAdmin)

TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
Tuesday 10 March 2026 01:03:15 +0000 (0:00:09.995) 0:00:38.298 *********
changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)

PLAY RECAP *********************************************************************
testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 10 March 2026 01:03:20 +0000 (0:00:05.100) 0:00:43.398 *********
===============================================================================
service-ks-register : ceph-rgw | Creating roles ------------------------ 10.00s
service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.84s
service-ks-register : ceph-rgw | Creating users ------------------------- 6.70s
service-ks-register : ceph-rgw | Granting user roles -------------------- 5.10s
service-ks-register : ceph-rgw | Creating services ---------------------- 4.40s
service-ks-register : ceph-rgw | Creating projects ---------------------- 4.10s
ceph-rgw : include_tasks ------------------------------------------------ 1.83s
Group hosts based on Kolla action --------------------------------------- 1.59s
Group hosts based on enabled services ----------------------------------- 1.03s

[WARNING]: Collection community.general does not support Ansible version 2.16.14

PLAY [Bootstraph ceph dashboard] ***********************************************

TASK [Disable the ceph dashboard] **********************************************
Tuesday 10 March 2026 01:02:29 +0000 (0:00:00.299) 0:00:00.299 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/ssl to false] ******************************************
Tuesday 10 March 2026 01:02:30 +0000 (0:00:01.465) 0:00:01.764 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_port to 7000] ***********************************
Tuesday 10 March 2026 01:02:32 +0000 (0:00:01.251) 0:00:03.016 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
Tuesday 10 March 2026 01:02:33 +0000 (0:00:01.113) 0:00:04.129 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
Tuesday 10 March 2026 01:02:34 +0000 (0:00:01.525) 0:00:05.654 *********
changed: [testbed-manager]

TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
Tuesday 10 March 2026 01:02:35 +0000 (0:00:01.220) 0:00:06.875 *********
changed: [testbed-manager]

TASK [Enable the ceph dashboard] ***********************************************
Tuesday 10 March 2026 01:02:36 +0000 (0:00:00.972) 0:00:07.848 *********
changed: [testbed-manager]

TASK [Write ceph_dashboard_password to temporary file] *************************
Tuesday 10 March 2026 01:02:38 +0000 (0:00:01.138) 0:00:08.987 *********
changed: [testbed-manager]

TASK [Create admin user] *******************************************************
Tuesday 10 March 2026 01:02:39 +0000 (0:00:01.386) 0:00:10.373 *********
changed: [testbed-manager]

TASK [Remove temporary file for ceph_dashboard_password] ***********************
Tuesday 10 March 2026 01:03:28 +0000 (0:00:49.203) 0:00:59.576 *********
skipping: [testbed-manager]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Tuesday 10 March 2026 01:03:28 +0000 (0:00:00.196) 0:00:59.773 *********
changed: [testbed-node-0]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Tuesday 10 March 2026 01:03:40 +0000 (0:00:11.652) 0:01:11.426 *********
changed: [testbed-node-1]

PLAY [Restart ceph manager services] *******************************************

TASK [Restart ceph manager service] ********************************************
Tuesday 10 March 2026 01:03:51 +0000 (0:00:11.440) 0:01:22.867 *********
changed: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 10 March 2026 01:03:53 +0000 (0:00:01.256) 0:01:24.123 *********
===============================================================================
Create admin user ------------------------------------------------------ 49.20s
Restart ceph manager service ------------------------------------------- 24.35s
Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.53s
Disable the ceph dashboard ---------------------------------------------- 1.47s
Write ceph_dashboard_password to temporary file ------------------------- 1.39s
Set mgr/dashboard/ssl to false ------------------------------------------ 1.25s
Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.22s
Enable the ceph dashboard ----------------------------------------------- 1.14s
Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.11s
Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.97s
Remove temporary file for ceph_dashboard_password ----------------------- 0.20s
2026-03-10 01:03:55.563140 | orchestrator | 2026-03-10 01:03:55 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:03:55.563151 | orchestrator | 2026-03-10 01:03:55 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:03:55.563906 | orchestrator | 2026-03-10 01:03:55 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:03:55.563952 | orchestrator | 2026-03-10 01:03:55 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:03:58.606084 | orchestrator | 2026-03-10 01:03:58 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:03:58.606222 | orchestrator | 2026-03-10 01:03:58 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:03:58.606361 | orchestrator | 2026-03-10 01:03:58 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:03:58.608364 | orchestrator | 2026-03-10 01:03:58 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:03:58.608439 | orchestrator | 2026-03-10 01:03:58 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:04:01.646766 | orchestrator | 2026-03-10 01:04:01 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:04:01.647337 | orchestrator | 2026-03-10 01:04:01 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:04:01.648580 | orchestrator | 2026-03-10 01:04:01 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:04:01.649481 | orchestrator | 2026-03-10 01:04:01 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:04:01.649532 | orchestrator | 2026-03-10 01:04:01 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:05:20.890104 | orchestrator | 2026-03-10 01:05:20 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:05:20.890307 | orchestrator | 2026-03-10 01:05:20 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:05:20.891147 | orchestrator | 2026-03-10 01:05:20 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED
2026-03-10 01:05:20.894166 | orchestrator | 2026-03-10 01:05:20 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED
2026-03-10 01:05:20.894201 | orchestrator | 2026-03-10 01:05:20 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:05:23.940515 | orchestrator | 2026-03-10 01:05:23 | INFO  | Task
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:23.941432 | orchestrator | 2026-03-10 01:05:23 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:23.943112 | orchestrator | 2026-03-10 01:05:23 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:23.944445 | orchestrator | 2026-03-10 01:05:23 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:23.944490 | orchestrator | 2026-03-10 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:26.987684 | orchestrator | 2026-03-10 01:05:26 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:26.987774 | orchestrator | 2026-03-10 01:05:26 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:26.991237 | orchestrator | 2026-03-10 01:05:26 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:26.991496 | orchestrator | 2026-03-10 01:05:26 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:26.991534 | orchestrator | 2026-03-10 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:30.086799 | orchestrator | 2026-03-10 01:05:30 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:30.087137 | orchestrator | 2026-03-10 01:05:30 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:30.088250 | orchestrator | 2026-03-10 01:05:30 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:30.089057 | orchestrator | 2026-03-10 01:05:30 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:30.089083 | orchestrator | 2026-03-10 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:33.160039 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:33.161468 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:33.168439 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:33.170861 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:33.170942 | orchestrator | 2026-03-10 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:36.233974 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:36.238592 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:36.239295 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:36.242845 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:36.242904 | orchestrator | 2026-03-10 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:39.300325 | orchestrator | 2026-03-10 01:05:39 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:39.301114 | orchestrator | 2026-03-10 01:05:39 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:39.302628 | orchestrator | 2026-03-10 01:05:39 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:39.303479 | orchestrator | 2026-03-10 01:05:39 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:39.303626 | orchestrator | 2026-03-10 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:42.337714 | orchestrator | 2026-03-10 01:05:42 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:42.338066 | orchestrator | 2026-03-10 01:05:42 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:42.338732 | orchestrator | 2026-03-10 01:05:42 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:42.340608 | orchestrator | 2026-03-10 01:05:42 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:42.340635 | orchestrator | 2026-03-10 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:45.445523 | orchestrator | 2026-03-10 01:05:45 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:45.445621 | orchestrator | 2026-03-10 01:05:45 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:45.445632 | orchestrator | 2026-03-10 01:05:45 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:45.445641 | orchestrator | 2026-03-10 01:05:45 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:45.445650 | orchestrator | 2026-03-10 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:48.512075 | orchestrator | 2026-03-10 01:05:48 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:48.512158 | orchestrator | 2026-03-10 01:05:48 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:48.512167 | orchestrator | 2026-03-10 01:05:48 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:48.512174 | orchestrator | 2026-03-10 01:05:48 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:48.512182 | orchestrator | 2026-03-10 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:51.564535 | orchestrator | 2026-03-10 01:05:51 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:51.564732 | orchestrator | 2026-03-10 01:05:51 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:51.565998 | orchestrator | 2026-03-10 01:05:51 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:51.566819 | orchestrator | 2026-03-10 01:05:51 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:51.566869 | orchestrator | 2026-03-10 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:54.609827 | orchestrator | 2026-03-10 01:05:54 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:54.610489 | orchestrator | 2026-03-10 01:05:54 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:54.611578 | orchestrator | 2026-03-10 01:05:54 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:54.612912 | orchestrator | 2026-03-10 01:05:54 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:54.612953 | orchestrator | 2026-03-10 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:57.690256 | orchestrator | 2026-03-10 01:05:57 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:05:57.690972 | orchestrator | 2026-03-10 01:05:57 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:05:57.693338 | orchestrator | 2026-03-10 01:05:57 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:05:57.695128 | orchestrator | 2026-03-10 01:05:57 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:05:57.695164 | orchestrator | 2026-03-10 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:00.729794 | orchestrator | 2026-03-10 01:06:00 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:00.732195 | orchestrator | 2026-03-10 01:06:00 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:00.732493 | orchestrator | 2026-03-10 01:06:00 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:06:00.734481 | orchestrator | 2026-03-10 01:06:00 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:06:00.734522 | orchestrator | 2026-03-10 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:03.775581 | orchestrator | 2026-03-10 01:06:03 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:03.783227 | orchestrator | 2026-03-10 01:06:03 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:03.784250 | orchestrator | 2026-03-10 01:06:03 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:06:03.787619 | orchestrator | 2026-03-10 01:06:03 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:06:03.787649 | orchestrator | 2026-03-10 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:06.846499 | orchestrator | 2026-03-10 01:06:06 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:06.846652 | orchestrator | 2026-03-10 01:06:06 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:06.846673 | orchestrator | 2026-03-10 01:06:06 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:06:06.846691 | orchestrator | 2026-03-10 01:06:06 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:06:06.846708 | orchestrator | 2026-03-10 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:09.866994 | orchestrator | 2026-03-10 01:06:09 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:09.868214 | orchestrator | 2026-03-10 01:06:09 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:09.869519 | orchestrator | 2026-03-10 01:06:09 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:06:09.870671 | orchestrator | 2026-03-10 01:06:09 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state STARTED 2026-03-10 01:06:09.870794 | orchestrator | 2026-03-10 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:12.905548 | orchestrator | 2026-03-10 01:06:12 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:12.906139 | orchestrator | 2026-03-10 01:06:12 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:06:12.908423 | orchestrator | 2026-03-10 01:06:12 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:12.909660 | orchestrator | 2026-03-10 01:06:12 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state STARTED 2026-03-10 01:06:12.913022 | orchestrator | 2026-03-10 01:06:12 | INFO  | Task 19b1eed0-316d-49c2-9cbf-b7bcdc527091 is in state SUCCESS 2026-03-10 01:06:12.914449 | orchestrator | 2026-03-10 01:06:12.914491 | orchestrator | 2026-03-10 01:06:12.914497 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:06:12.914502 | orchestrator | 2026-03-10 01:06:12.914507 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:06:12.914511 | orchestrator | Tuesday 10 March 2026 01:02:29 +0000 (0:00:00.359) 0:00:00.359 ********* 2026-03-10 01:06:12.914516 | orchestrator | ok: [testbed-manager] 2026-03-10 01:06:12.914521 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:06:12.914525 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:06:12.914529 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 01:06:12.914533 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:06:12.914537 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:06:12.914541 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:06:12.914545 | orchestrator | 2026-03-10 01:06:12.914549 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:06:12.914553 | orchestrator | Tuesday 10 March 2026 01:02:30 +0000 (0:00:00.897) 0:00:01.256 ********* 2026-03-10 01:06:12.914557 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914562 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914565 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914569 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914573 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914576 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914580 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-10 01:06:12.914584 | orchestrator | 2026-03-10 01:06:12.914588 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-10 01:06:12.914592 | orchestrator | 2026-03-10 01:06:12.914596 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-10 01:06:12.914600 | orchestrator | Tuesday 10 March 2026 01:02:30 +0000 (0:00:00.874) 0:00:02.131 ********* 2026-03-10 01:06:12.914604 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:06:12.914610 | orchestrator | 2026-03-10 01:06:12.914666 | orchestrator | TASK [prometheus : Ensuring config directories exist] 
************************** 2026-03-10 01:06:12.914672 | orchestrator | Tuesday 10 March 2026 01:02:32 +0000 (0:00:01.705) 0:00:03.837 ********* 2026-03-10 01:06:12.914678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.914685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.914703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.914708 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:06:12.914736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.914741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2026-03-10 01:06:12.914804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914830 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.914834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.914850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914858 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914946 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:06:12.914954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.914979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914985 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.914991 | orchestrator | 2026-03-10 01:06:12.914998 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-10 01:06:12.915008 | orchestrator | Tuesday 10 March 2026 01:02:35 +0000 (0:00:03.027) 0:00:06.864 ********* 2026-03-10 01:06:12.915015 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:06:12.915031 | orchestrator | 2026-03-10 01:06:12.915038 | orchestrator | TASK 
[service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-10 01:06:12.915044 | orchestrator | Tuesday 10 March 2026 01:02:37 +0000 (0:00:01.516) 0:00:08.381 ********* 2026-03-10 01:06:12.915051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:06:12.915091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915138 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915151 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.915158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.915308 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:06:12.915562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915605 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.915612 | orchestrator | 2026-03-10 01:06:12.915618 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-10 01:06:12.915624 | orchestrator | Tuesday 10 March 2026 01:02:43 +0000 (0:00:06.761) 0:00:15.142 ********* 2026-03-10 01:06:12.915631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 01:06:12.915638 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 01:06:12.915672 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915680 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:06:12.915688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915692 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915758 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915855 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:12.915908 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:12.915912 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:06:12.915917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.915938 | orchestrator | skipping: [testbed-node-2] 
2026-03-10 01:06:12.915944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915972 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:06:12.915979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.915986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.915999 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:06:12.916005 | orchestrator | 2026-03-10 01:06:12.916035 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-10 01:06:12.916040 | orchestrator | Tuesday 10 March 2026 01:02:46 +0000 (0:00:02.202) 0:00:17.345 ********* 2026-03-10 01:06:12.916044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916206 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 01:06:12.916213 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 01:06:12.916236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916243 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:12.916249 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:12.916255 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:06:12.916260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916268 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:06:12.916299 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:12.916310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 
01:06:12.916330 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:06:12.916337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916962 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:06:12.916971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:06:12.916985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.916993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:06:12.917000 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:06:12.917007 | orchestrator | 2026-03-10 01:06:12.917014 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-10 01:06:12.917021 | orchestrator | Tuesday 10 March 2026 01:02:48 +0000 (0:00:02.284) 0:00:19.630 ********* 2026-03-10 01:06:12.917028 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917058 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:06:12.917067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917108 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.917125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917155 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917265 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 
01:06:12.917277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.917320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-10 01:06:12.917338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.917418 | orchestrator | 2026-03-10 01:06:12.917426 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-10 01:06:12.917432 | orchestrator | Tuesday 10 March 2026 01:02:55 +0000 (0:00:07.057) 0:00:26.688 ********* 2026-03-10 01:06:12.917438 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 01:06:12.917445 | orchestrator | 2026-03-10 01:06:12.917451 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-10 01:06:12.917458 | orchestrator | Tuesday 10 March 2026 01:02:57 +0000 (0:00:01.521) 0:00:28.209 ********* 2026-03-10 01:06:12.917465 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917484 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917491 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:06:12.917516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917523 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917530 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917536 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917546 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 
01:06:12.917553 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097477, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917560 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917577 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917584 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917591 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917598 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917609 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 
108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:06:12.917616 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917623 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917638 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1097538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.985077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917645 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917652 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917658 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917668 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917676 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917682 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917696 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917703 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917710 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917717 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917726 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917733 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917743 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917753 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917760 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917767 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917774 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917780 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097473, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:06:12.917791 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917802 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917812 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917818 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917825 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917832 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917839 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917849 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917860 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917876 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917883 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917889 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917896 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917911 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917918 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917930 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917937 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917944 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917951 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917958 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1097488, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9830008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:06:12.917973 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917980 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917990 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.917997 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918004 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918011 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918065 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918083 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918091 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918101 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918109 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918115 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918123 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918129 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918146 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918154 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097471, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9681208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:06:12.918164 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918170 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918177 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918184 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918195 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918205 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:06:12.918484 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918491 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:12.918507 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918514 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918523 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 
'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918531 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918552 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918569 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918575 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918585 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918592 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-10 01:06:12.918598 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097478, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.969786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:06:12.918610 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918616 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:06:12.918627 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918633 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918642 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918649 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918655 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918666 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.918672 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918678 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1097486, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9710007, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918687 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918694 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918703 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918709 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918715 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.918721 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918731 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918737 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918747 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918753 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918759 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.918768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097481, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9700506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918775 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918785 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918791 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918797 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.918803 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918814 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918820 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.918826 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097475, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9690006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918834 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.984385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918841 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9673376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918852 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1097545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.987001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918858 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1097535, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9840713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918863 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097472, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9683506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918874 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097470, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9678664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918880 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1097485, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9706664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918890 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097483, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9701953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918901 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1097544, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9857028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 01:06:12.918908 | orchestrator |
2026-03-10 01:06:12.918915 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-10 01:06:12.918922 | orchestrator | Tuesday 10 March 2026 01:03:32 +0000 (0:00:35.913) 0:01:04.123 *********
2026-03-10 01:06:12.918928 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 01:06:12.918934 | orchestrator |
2026-03-10 01:06:12.918940 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-10 01:06:12.918946 | orchestrator | Tuesday 10 March 2026 01:03:33 +0000 (0:00:00.893) 0:01:05.017 *********
2026-03-10 01:06:12.918952 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.918984 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 01:06:12.918990 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.919020 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-10 01:06:12.919026 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.919058 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.919090 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.919128 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.919169 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-03-10 01:06:12.919199 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-10 01:06:12.919205 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-10 01:06:12.919211 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-10 01:06:12.919216 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-10 01:06:12.919223 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-10 01:06:12.919229 | orchestrator |
2026-03-10 01:06:12.919235 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-10 01:06:12.919241 | orchestrator | Tuesday 10 March 2026 01:03:36 +0000 (0:00:02.885) 0:01:07.903 *********
2026-03-10 01:06:12.919247 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919253 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919262 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919268 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919274 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919280 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919286 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919292 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919298 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919304 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919310 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919316 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919322 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-10 01:06:12.919328 | orchestrator |
2026-03-10 01:06:12.919334 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-10 01:06:12.919340 | orchestrator | Tuesday 10 March 2026 01:03:59 +0000 (0:00:22.519) 0:01:30.422 *********
2026-03-10 01:06:12.919345 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919352 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919380 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919387 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919393 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919399 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919405 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919410 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919417 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919422 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919429 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919434 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919440 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-10 01:06:12.919446 | orchestrator |
2026-03-10 01:06:12.919452 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-10 01:06:12.919465 | orchestrator | Tuesday 10 March 2026 01:04:03 +0000 (0:00:04.584) 0:01:35.006 *********
2026-03-10 01:06:12.919471 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919478 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919483 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919489 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919495 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919501 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919511 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919517 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919523 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919529 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919535 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919542 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-10 01:06:12.919547 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919554 | orchestrator |
2026-03-10 01:06:12.919560 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-10 01:06:12.919566 | orchestrator | Tuesday 10 March 2026 01:04:07 +0000 (0:00:03.400) 0:01:38.406 *********
2026-03-10 01:06:12.919572 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 01:06:12.919578 | orchestrator |
2026-03-10 01:06:12.919585 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-10 01:06:12.919591 | orchestrator | Tuesday 10 March 2026 01:04:08 +0000 (0:00:01.453) 0:01:39.860 *********
2026-03-10 01:06:12.919597 | orchestrator | skipping: [testbed-manager]
2026-03-10 01:06:12.919603 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919609 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919616 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919622 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919628 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919634 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919640 | orchestrator |
2026-03-10 01:06:12.919645 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-10 01:06:12.919649 | orchestrator | Tuesday 10 March 2026 01:04:09 +0000 (0:00:00.913) 0:01:40.773 *********
2026-03-10 01:06:12.919653 | orchestrator | skipping: [testbed-manager]
2026-03-10 01:06:12.919657 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919667 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919673 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:12.919679 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919685 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:12.919691 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:12.919698 | orchestrator |
2026-03-10 01:06:12.919704 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-10 01:06:12.919711 | orchestrator | Tuesday 10 March 2026 01:04:13 +0000 (0:00:04.002) 0:01:44.775 *********
2026-03-10 01:06:12.919718 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919724 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919730 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919744 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919750 | orchestrator | skipping: [testbed-manager]
2026-03-10 01:06:12.919756 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919762 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919767 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919773 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919779 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919785 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919792 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919798 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-10 01:06:12.919804 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919811 | orchestrator |
2026-03-10 01:06:12.919817 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-10 01:06:12.919823 | orchestrator | Tuesday 10 March 2026 01:04:16 +0000 (0:00:03.126) 0:01:47.902 *********
2026-03-10 01:06:12.919830 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919836 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919842 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919848 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919852 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919855 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919859 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919863 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919867 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919871 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919874 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919878 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919882 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-10 01:06:12.919886 | orchestrator |
2026-03-10 01:06:12.919890 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-10 01:06:12.919898 | orchestrator | Tuesday 10 March 2026 01:04:19 +0000 (0:00:02.858) 0:01:50.761 *********
2026-03-10 01:06:12.919902 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-03-10 01:06:12.919922 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 01:06:12.919926 | orchestrator |
2026-03-10 01:06:12.919930 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-10 01:06:12.919933 | orchestrator | Tuesday 10 March 2026 01:04:21 +0000 (0:00:01.882) 0:01:52.644 *********
2026-03-10 01:06:12.919937 | orchestrator | skipping: [testbed-manager]
2026-03-10 01:06:12.919941 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919945 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919949 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919952 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.919961 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.919965 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.919969 | orchestrator |
2026-03-10 01:06:12.919973 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-10 01:06:12.919977 | orchestrator | Tuesday 10 March 2026 01:04:23 +0000 (0:00:01.787) 0:01:54.432 *********
2026-03-10 01:06:12.919981 | orchestrator | skipping: [testbed-manager]
2026-03-10 01:06:12.919984 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:12.919988 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:12.919992 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:12.919996 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:06:12.920000 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:06:12.920004 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:06:12.920008 | orchestrator |
2026-03-10 01:06:12.920012 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-10 01:06:12.920016 | orchestrator | Tuesday 10 March 2026 01:04:24 +0000 (0:00:01.383) 0:01:55.815 *********
2026-03-10 01:06:12.920024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-10 01:06:12.920029 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-10 01:06:12.920034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'dimensions': {}}}) 2026-03-10 01:06:12.920038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.920047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.920055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.920067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.920079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920092 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:06:12.920110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920119 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:06:12.920140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920155 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920159 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:06:12.920183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:06:12.920187 | orchestrator | 2026-03-10 01:06:12.920191 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-10 01:06:12.920195 | orchestrator | Tuesday 10 March 2026 01:04:29 +0000 (0:00:05.253) 0:02:01.069 ********* 2026-03-10 01:06:12.920199 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-10 01:06:12.920203 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:06:12.920207 | orchestrator | 2026-03-10 01:06:12.920214 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:06:12.920218 | orchestrator | Tuesday 10 March 2026 01:04:31 
+0000 (0:00:01.801) 0:02:02.870 *********
2026-03-10 01:06:12.920222 | orchestrator |
2026-03-10 01:06:12.920226 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-10 01:06:12.920230 | orchestrator | Tuesday 10 March 2026 01:04:31 +0000 (0:00:00.175) 0:02:03.045 *********
2026-03-10 01:06:12.920234 | orchestrator |
2026-03-10 01:06:12.920238 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-10 01:06:12.920242 | orchestrator | Tuesday 10 March 2026 01:04:31 +0000 (0:00:00.144) 0:02:03.190 *********
2026-03-10 01:06:12.920246 | orchestrator |
2026-03-10 01:06:12.920249 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-10 01:06:12.920253 | orchestrator | Tuesday 10 March 2026 01:04:32 +0000 (0:00:00.180) 0:02:03.371 *********
2026-03-10 01:06:12.920257 | orchestrator |
2026-03-10 01:06:12.920261 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-10 01:06:12.920265 | orchestrator | Tuesday 10 March 2026 01:04:32 +0000 (0:00:00.474) 0:02:03.846 *********
2026-03-10 01:06:12.920268 | orchestrator |
2026-03-10 01:06:12.920272 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-10 01:06:12.920276 | orchestrator | Tuesday 10 March 2026 01:04:32 +0000 (0:00:00.081) 0:02:03.928 *********
2026-03-10 01:06:12.920280 | orchestrator |
2026-03-10 01:06:12.920284 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-10 01:06:12.920287 | orchestrator | Tuesday 10 March 2026 01:04:32 +0000 (0:00:00.104) 0:02:04.033 *********
2026-03-10 01:06:12.920291 | orchestrator |
2026-03-10 01:06:12.920295 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-10 01:06:12.920299 | orchestrator | Tuesday 10 March 2026 01:04:32 +0000 (0:00:00.092) 0:02:04.125 *********
2026-03-10 01:06:12.920309 | orchestrator | changed: [testbed-manager]
2026-03-10 01:06:12.920313 | orchestrator |
2026-03-10 01:06:12.920317 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-10 01:06:12.920321 | orchestrator | Tuesday 10 March 2026 01:04:49 +0000 (0:00:16.879) 0:02:21.005 *********
2026-03-10 01:06:12.920324 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:06:12.920328 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:12.920332 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:12.920335 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:12.920339 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:06:12.920343 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:06:12.920347 | orchestrator | changed: [testbed-manager]
2026-03-10 01:06:12.920351 | orchestrator |
2026-03-10 01:06:12.920376 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-10 01:06:12.920381 | orchestrator | Tuesday 10 March 2026 01:05:05 +0000 (0:00:15.927) 0:02:36.932 *********
2026-03-10 01:06:12.920386 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:12.920389 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:12.920393 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:12.920397 | orchestrator |
2026-03-10 01:06:12.920401 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-10 01:06:12.920405 | orchestrator | Tuesday 10 March 2026 01:05:16 +0000 (0:00:10.342) 0:02:47.275 *********
2026-03-10 01:06:12.920409 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:12.920413 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:12.920417 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:12.920421 | orchestrator |
2026-03-10 01:06:12.920424 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-10 01:06:12.920428 | orchestrator | Tuesday 10 March 2026 01:05:26 +0000 (0:00:10.478) 0:02:57.753 *********
2026-03-10 01:06:12.920432 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:12.920436 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:12.920440 | orchestrator | changed: [testbed-manager]
2026-03-10 01:06:12.920444 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:06:12.920451 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:06:12.920455 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:12.920459 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:06:12.920463 | orchestrator |
2026-03-10 01:06:12.920467 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-10 01:06:12.920471 | orchestrator | Tuesday 10 March 2026 01:05:40 +0000 (0:00:13.945) 0:03:11.698 *********
2026-03-10 01:06:12.920475 | orchestrator | changed: [testbed-manager]
2026-03-10 01:06:12.920478 | orchestrator |
2026-03-10 01:06:12.920482 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-10 01:06:12.920486 | orchestrator | Tuesday 10 March 2026 01:05:52 +0000 (0:00:12.434) 0:03:24.133 *********
2026-03-10 01:06:12.920490 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:12.920493 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:12.920497 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:12.920501 | orchestrator |
2026-03-10 01:06:12.920505 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-10 01:06:12.920509 | orchestrator | Tuesday 10 March 2026 01:05:59 +0000 (0:00:06.942) 0:03:31.076 *********
2026-03-10 01:06:12.920513 | orchestrator | changed: [testbed-manager]
2026-03-10 01:06:12.920516 | orchestrator |
2026-03-10 01:06:12.920520 | orchestrator | RUNNING HANDLER
[prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-10 01:06:12.920524 | orchestrator | Tuesday 10 March 2026 01:06:05 +0000 (0:00:05.534) 0:03:36.610 *********
2026-03-10 01:06:12.920528 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:06:12.920532 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:06:12.920535 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:06:12.920539 | orchestrator |
2026-03-10 01:06:12.920543 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:06:12.920551 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-10 01:06:12.920557 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-10 01:06:12.920565 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-10 01:06:12.920569 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-10 01:06:12.920573 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:06:12.920577 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:06:12.920581 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:06:12.920584 | orchestrator |
2026-03-10 01:06:12.920588 | orchestrator |
2026-03-10 01:06:12.920593 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:06:12.920596 | orchestrator | Tuesday 10 March 2026 01:06:11 +0000 (0:00:05.716) 0:03:42.326 *********
2026-03-10 01:06:12.920600 | orchestrator | ===============================================================================
2026-03-10 01:06:12.920604 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 35.91s
2026-03-10 01:06:12.920608 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.52s
2026-03-10 01:06:12.920612 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.88s
2026-03-10 01:06:12.920615 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.93s
2026-03-10 01:06:12.920619 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.95s
2026-03-10 01:06:12.920623 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.43s
2026-03-10 01:06:12.920627 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.48s
2026-03-10 01:06:12.920631 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.34s
2026-03-10 01:06:12.920636 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.06s
2026-03-10 01:06:12.920639 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.94s
2026-03-10 01:06:12.920643 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.76s
2026-03-10 01:06:12.920648 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.72s
2026-03-10 01:06:12.920651 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.53s
2026-03-10 01:06:12.920655 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.25s
2026-03-10 01:06:12.920659 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.58s
2026-03-10 01:06:12.920665 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.00s
2026-03-10 01:06:12.920672 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.40s
2026-03-10 01:06:12.920678 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.13s
2026-03-10 01:06:12.920685 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.03s
2026-03-10 01:06:12.920691 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.89s
2026-03-10 01:06:15.959344 | orchestrator | 2026-03-10 01:06:15 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:06:15.960291 | orchestrator | 2026-03-10 01:06:15 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:06:15.961000 | orchestrator | 2026-03-10 01:06:15 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED
2026-03-10 01:06:15.962947 | orchestrator | 2026-03-10 01:06:15 | INFO  | Task 299d9877-c284-4378-8fa8-51c3e4c5fe62 is in state SUCCESS
2026-03-10 01:06:15.965165 | orchestrator |
2026-03-10 01:06:15.965218 | orchestrator |
2026-03-10 01:06:15.965224 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:06:15.965229 | orchestrator |
2026-03-10 01:06:15.965233 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:06:15.965238 | orchestrator | Tuesday 10 March 2026 01:02:37 +0000 (0:00:00.441) 0:00:00.441 *********
2026-03-10 01:06:15.965242 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:06:15.965248 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:06:15.965252 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:06:15.965256 | orchestrator |
2026-03-10 01:06:15.965260 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:06:15.965264 | orchestrator | Tuesday 10 March 2026 01:02:38 +0000 (0:00:00.616) 0:00:01.058 *********
2026-03-10 01:06:15.965269 |
orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-10 01:06:15.965279 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-10 01:06:15.965283 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-10 01:06:15.965288 | orchestrator | 2026-03-10 01:06:15.965294 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-10 01:06:15.965301 | orchestrator | 2026-03-10 01:06:15.965307 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:06:15.965310 | orchestrator | Tuesday 10 March 2026 01:02:38 +0000 (0:00:00.775) 0:00:01.833 ********* 2026-03-10 01:06:15.965326 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:06:15.965331 | orchestrator | 2026-03-10 01:06:15.965334 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-10 01:06:15.965338 | orchestrator | Tuesday 10 March 2026 01:02:39 +0000 (0:00:00.877) 0:00:02.710 ********* 2026-03-10 01:06:15.965342 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-10 01:06:15.965345 | orchestrator | 2026-03-10 01:06:15.965349 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-10 01:06:15.965366 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:04.682) 0:00:07.393 ********* 2026-03-10 01:06:15.965370 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-10 01:06:15.965375 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-10 01:06:15.965379 | orchestrator | 2026-03-10 01:06:15.965383 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-10 01:06:15.965387 | 
orchestrator | Tuesday 10 March 2026 01:02:52 +0000 (0:00:07.859) 0:00:15.252 ********* 2026-03-10 01:06:15.965391 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-10 01:06:15.965395 | orchestrator | 2026-03-10 01:06:15.965399 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-10 01:06:15.965402 | orchestrator | Tuesday 10 March 2026 01:02:56 +0000 (0:00:04.224) 0:00:19.477 ********* 2026-03-10 01:06:15.965407 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:06:15.965411 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-10 01:06:15.965415 | orchestrator | 2026-03-10 01:06:15.965418 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-10 01:06:15.965422 | orchestrator | Tuesday 10 March 2026 01:03:00 +0000 (0:00:04.282) 0:00:23.760 ********* 2026-03-10 01:06:15.965426 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:06:15.965430 | orchestrator | 2026-03-10 01:06:15.965433 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-10 01:06:15.965450 | orchestrator | Tuesday 10 March 2026 01:03:06 +0000 (0:00:05.088) 0:00:28.848 ********* 2026-03-10 01:06:15.965455 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-10 01:06:15.965458 | orchestrator | 2026-03-10 01:06:15.965462 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-10 01:06:15.965466 | orchestrator | Tuesday 10 March 2026 01:03:11 +0000 (0:00:05.046) 0:00:33.895 ********* 2026-03-10 01:06:15.965484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965507 | orchestrator | 2026-03-10 01:06:15.965511 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:06:15.965515 | orchestrator | Tuesday 10 March 2026 01:03:20 +0000 (0:00:09.106) 0:00:43.002 ********* 2026-03-10 01:06:15.965519 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:06:15.965523 | orchestrator | 2026-03-10 01:06:15.965529 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-10 01:06:15.965533 | orchestrator | Tuesday 10 March 2026 01:03:20 +0000 (0:00:00.742) 0:00:43.745 ********* 2026-03-10 01:06:15.965537 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.965541 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:06:15.965545 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:06:15.965549 | orchestrator | 2026-03-10 01:06:15.965553 | orchestrator | TASK 
[glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-10 01:06:15.965557 | orchestrator | Tuesday 10 March 2026 01:03:24 +0000 (0:00:03.910) 0:00:47.655 ********* 2026-03-10 01:06:15.965561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:15.965565 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:15.965569 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:15.965573 | orchestrator | 2026-03-10 01:06:15.965580 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-10 01:06:15.965586 | orchestrator | Tuesday 10 March 2026 01:03:26 +0000 (0:00:01.562) 0:00:49.217 ********* 2026-03-10 01:06:15.965591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:15.965601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:15.965607 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:15.965612 | orchestrator | 2026-03-10 01:06:15.965618 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-10 01:06:15.965623 | orchestrator | Tuesday 10 March 2026 01:03:27 +0000 (0:00:01.341) 0:00:50.559 ********* 2026-03-10 01:06:15.965629 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:06:15.965640 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:06:15.965652 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:06:15.965658 | orchestrator | 2026-03-10 01:06:15.965663 | orchestrator | TASK [glance : Check if policies shall be overwritten] 
************************* 2026-03-10 01:06:15.965669 | orchestrator | Tuesday 10 March 2026 01:03:28 +0000 (0:00:00.770) 0:00:51.330 ********* 2026-03-10 01:06:15.965674 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:15.965680 | orchestrator | 2026-03-10 01:06:15.965686 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-10 01:06:15.965693 | orchestrator | Tuesday 10 March 2026 01:03:28 +0000 (0:00:00.144) 0:00:51.475 ********* 2026-03-10 01:06:15.965699 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:15.965705 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:15.965709 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:15.965712 | orchestrator | 2026-03-10 01:06:15.965716 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:06:15.965720 | orchestrator | Tuesday 10 March 2026 01:03:29 +0000 (0:00:00.372) 0:00:51.847 ********* 2026-03-10 01:06:15.965724 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:06:15.965727 | orchestrator | 2026-03-10 01:06:15.965731 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-10 01:06:15.965735 | orchestrator | Tuesday 10 March 2026 01:03:29 +0000 (0:00:00.617) 0:00:52.464 ********* 2026-03-10 01:06:15.965742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965764 | orchestrator | 2026-03-10 01:06:15.965767 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-10 01:06:15.965771 | orchestrator | Tuesday 10 March 2026 01:03:34 +0000 (0:00:04.952) 0:00:57.417 ********* 2026-03-10 01:06:15.965782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:06:15.965790 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:15.965794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:06:15.965798 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:15.965806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:06:15.965813 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:15.965817 | orchestrator | 2026-03-10 01:06:15.965821 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-10 01:06:15.965825 | orchestrator | Tuesday 10 March 2026 01:03:39 +0000 (0:00:05.182) 0:01:02.600 ********* 2026-03-10 01:06:15.965835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:06:15.965839 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:15.965846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:06:15.965850 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:15.965858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}})  2026-03-10 01:06:15.965865 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:15.965869 | orchestrator | 2026-03-10 01:06:15.965873 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-10 01:06:15.965876 | orchestrator | Tuesday 10 March 2026 01:03:43 +0000 (0:00:03.919) 0:01:06.520 ********* 2026-03-10 01:06:15.965880 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:15.965884 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:15.965888 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:15.965891 | orchestrator | 2026-03-10 01:06:15.965895 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-10 01:06:15.965899 | orchestrator | Tuesday 10 March 2026 01:03:48 +0000 (0:00:04.882) 0:01:11.402 ********* 2026-03-10 01:06:15.965903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.965921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-10 01:06:15.965925 | orchestrator |
2026-03-10 01:06:15.965929 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-03-10 01:06:15.965933 | orchestrator | Tuesday 10 March 2026 01:03:54 +0000 (0:00:08.742) 0:01:17.809 *********
2026-03-10 01:06:15.965936 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:15.965940 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:15.965944 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:15.965948 | orchestrator |
2026-03-10 01:06:15.965952 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-03-10 01:06:15.965955 | orchestrator | Tuesday 10 March 2026 01:04:03 +0000 (0:00:05.601) 0:01:26.552 *********
2026-03-10 01:06:15.965962 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:15.965966 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:15.965970 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:15.965974 | orchestrator |
2026-03-10 01:06:15.965977 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-03-10 01:06:15.965981 | orchestrator | Tuesday 10 March 2026 01:04:09 +0000 (0:00:06.886) 0:01:32.153 *********
2026-03-10 01:06:15.965985 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:15.965991 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:15.965995 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:15.965999 | orchestrator |
2026-03-10 01:06:15.966003 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-03-10 01:06:15.966007 | orchestrator | Tuesday 10 March 2026 01:04:16 +0000 (0:00:05.420) 0:01:39.040 *********
2026-03-10 01:06:15.966010 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:15.966047 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:15.966052 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:15.966055 | orchestrator |
2026-03-10 01:06:15.966059 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-03-10 01:06:15.966063 | orchestrator | Tuesday 10 March 2026 01:04:21 +0000 (0:00:06.628) 0:01:44.461 *********
2026-03-10 01:06:15.966067 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:15.966070 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:15.966074 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:15.966078 | orchestrator |
2026-03-10 01:06:15.966082 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-03-10 01:06:15.966086 | orchestrator | Tuesday 10 March 2026 01:04:28 +0000 (0:00:00.404) 0:01:51.089 *********
2026-03-10 01:06:15.966089 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:15.966093 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:15.966097 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:15.966100 | orchestrator |
2026-03-10 01:06:15.966104 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-03-10 01:06:15.966111 | orchestrator | Tuesday 10 March 2026 01:04:28 +0000 (0:00:00.404) 0:01:51.494 *********
2026-03-10 01:06:15.966115 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-10 01:06:15.966119 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:15.966123 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-10 01:06:15.966127 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:15.966131 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-10 01:06:15.966134 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:15.966139 | orchestrator | 2026-03-10 01:06:15.966143 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-10 01:06:15.966146 | orchestrator | Tuesday 10 March 2026 01:04:33 +0000 (0:00:05.131) 0:01:56.626 ********* 2026-03-10 01:06:15.966150 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:06:15.966154 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966158 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:06:15.966161 | orchestrator | 2026-03-10 01:06:15.966165 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-10 01:06:15.966169 | orchestrator | Tuesday 10 March 2026 01:04:41 +0000 (0:00:07.642) 0:02:04.268 ********* 2026-03-10 01:06:15.966173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.966188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:06:15.966193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 
01:06:15.966201 | orchestrator | 2026-03-10 01:06:15.966205 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:06:15.966208 | orchestrator | Tuesday 10 March 2026 01:04:47 +0000 (0:00:05.842) 0:02:10.111 ********* 2026-03-10 01:06:15.966212 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:15.966216 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:15.966220 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:15.966223 | orchestrator | 2026-03-10 01:06:15.966227 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-10 01:06:15.966231 | orchestrator | Tuesday 10 March 2026 01:04:47 +0000 (0:00:00.323) 0:02:10.434 ********* 2026-03-10 01:06:15.966235 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966238 | orchestrator | 2026-03-10 01:06:15.966242 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-10 01:06:15.966246 | orchestrator | Tuesday 10 March 2026 01:04:50 +0000 (0:00:02.602) 0:02:13.037 ********* 2026-03-10 01:06:15.966250 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966253 | orchestrator | 2026-03-10 01:06:15.966257 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-10 01:06:15.966261 | orchestrator | Tuesday 10 March 2026 01:04:53 +0000 (0:00:03.123) 0:02:16.161 ********* 2026-03-10 01:06:15.966265 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966270 | orchestrator | 2026-03-10 01:06:15.966276 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-10 01:06:15.966281 | orchestrator | Tuesday 10 March 2026 01:04:55 +0000 (0:00:02.655) 0:02:18.817 ********* 2026-03-10 01:06:15.966287 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966294 | orchestrator | 2026-03-10 01:06:15.966300 | orchestrator | 
TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-10 01:06:15.966313 | orchestrator | Tuesday 10 March 2026 01:05:29 +0000 (0:00:33.495) 0:02:52.312 ********* 2026-03-10 01:06:15.966322 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966328 | orchestrator | 2026-03-10 01:06:15.966333 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-10 01:06:15.966340 | orchestrator | Tuesday 10 March 2026 01:05:32 +0000 (0:00:03.149) 0:02:55.462 ********* 2026-03-10 01:06:15.966346 | orchestrator | 2026-03-10 01:06:15.966390 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-10 01:06:15.966398 | orchestrator | Tuesday 10 March 2026 01:05:33 +0000 (0:00:00.469) 0:02:55.932 ********* 2026-03-10 01:06:15.966404 | orchestrator | 2026-03-10 01:06:15.966412 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-10 01:06:15.966419 | orchestrator | Tuesday 10 March 2026 01:05:33 +0000 (0:00:00.082) 0:02:56.014 ********* 2026-03-10 01:06:15.966425 | orchestrator | 2026-03-10 01:06:15.966431 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-10 01:06:15.966437 | orchestrator | Tuesday 10 March 2026 01:05:33 +0000 (0:00:00.087) 0:02:56.101 ********* 2026-03-10 01:06:15.966443 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:15.966449 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:06:15.966456 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:06:15.966462 | orchestrator | 2026-03-10 01:06:15.966468 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:06:15.966481 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-10 01:06:15.966496 | orchestrator | testbed-node-1 : ok=16  
changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-10 01:06:15.966502 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-10 01:06:15.966508 | orchestrator | 2026-03-10 01:06:15.966514 | orchestrator | 2026-03-10 01:06:15.966520 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:06:15.966526 | orchestrator | Tuesday 10 March 2026 01:06:13 +0000 (0:00:40.631) 0:03:36.733 ********* 2026-03-10 01:06:15.966532 | orchestrator | =============================================================================== 2026-03-10 01:06:15.966538 | orchestrator | glance : Restart glance-api container ---------------------------------- 40.63s 2026-03-10 01:06:15.966544 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 33.50s 2026-03-10 01:06:15.966550 | orchestrator | glance : Ensuring config directories exist ------------------------------ 9.11s 2026-03-10 01:06:15.966556 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.74s 2026-03-10 01:06:15.966562 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.86s 2026-03-10 01:06:15.966568 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.64s 2026-03-10 01:06:15.966574 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.89s 2026-03-10 01:06:15.966580 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.63s 2026-03-10 01:06:15.966586 | orchestrator | glance : Copying over config.json files for services -------------------- 6.41s 2026-03-10 01:06:15.966592 | orchestrator | glance : Check glance containers ---------------------------------------- 5.84s 2026-03-10 01:06:15.966596 | orchestrator | glance : Copying over glance-cache.conf for glance_api 
------------------ 5.60s 2026-03-10 01:06:15.966599 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.42s 2026-03-10 01:06:15.966603 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.18s 2026-03-10 01:06:15.966607 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.13s 2026-03-10 01:06:15.966611 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 5.09s 2026-03-10 01:06:15.966615 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.05s 2026-03-10 01:06:15.966618 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.95s 2026-03-10 01:06:15.966622 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.88s 2026-03-10 01:06:15.966627 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.68s 2026-03-10 01:06:15.966633 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.28s 2026-03-10 01:06:15.966639 | orchestrator | 2026-03-10 01:06:15 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:06:15.966645 | orchestrator | 2026-03-10 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:19.031293 | orchestrator | 2026-03-10 01:06:19 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:19.034808 | orchestrator | 2026-03-10 01:06:19 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:06:19.037441 | orchestrator | 2026-03-10 01:06:19 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:19.040485 | orchestrator | 2026-03-10 01:06:19 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:06:19.040551 | orchestrator | 2026-03-10 
01:06:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:22.077161 | orchestrator | 2026-03-10 01:06:22 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:22.078407 | orchestrator | 2026-03-10 01:06:22 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:06:22.081866 | orchestrator | 2026-03-10 01:06:22 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:22.083204 | orchestrator | 2026-03-10 01:06:22 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:06:22.083236 | orchestrator | 2026-03-10 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:25.126909 | orchestrator | 2026-03-10 01:06:25 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:25.127890 | orchestrator | 2026-03-10 01:06:25 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:06:25.128810 | orchestrator | 2026-03-10 01:06:25 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:25.130064 | orchestrator | 2026-03-10 01:06:25 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:06:25.130101 | orchestrator | 2026-03-10 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:06:28.173841 | orchestrator | 2026-03-10 01:06:28 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:28.174569 | orchestrator | 2026-03-10 01:06:28 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:06:28.177723 | orchestrator | 2026-03-10 01:06:28 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state STARTED 2026-03-10 01:06:28.185405 | orchestrator | 2026-03-10 01:06:28 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:06:28.185487 | orchestrator | 2026-03-10 01:06:28 | INFO  | Wait 1 
second(s) until the next check 2026-03-10 01:06:31.243535 | orchestrator | 2026-03-10 01:06:31 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:06:31.246668 | orchestrator | 2026-03-10 01:06:31 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:06:31.252610 | orchestrator | 2026-03-10 01:06:31 | INFO  | Task 709aaa03-7a84-4bce-a9de-2333bad5aa00 is in state SUCCESS 2026-03-10 01:06:31.255241 | orchestrator | 2026-03-10 01:06:31.255310 | orchestrator | 2026-03-10 01:06:31.255320 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:06:31.255328 | orchestrator | 2026-03-10 01:06:31.255338 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:06:31.255374 | orchestrator | Tuesday 10 March 2026 01:03:11 +0000 (0:00:00.550) 0:00:00.550 ********* 2026-03-10 01:06:31.255386 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:06:31.255398 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:06:31.255408 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:06:31.255418 | orchestrator | 2026-03-10 01:06:31.255427 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:06:31.255437 | orchestrator | Tuesday 10 March 2026 01:03:12 +0000 (0:00:00.721) 0:00:01.272 ********* 2026-03-10 01:06:31.255447 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-10 01:06:31.255458 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-10 01:06:31.255468 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-10 01:06:31.255478 | orchestrator | 2026-03-10 01:06:31.255488 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-10 01:06:31.255499 | orchestrator | 2026-03-10 01:06:31.255509 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-03-10 01:06:31.255519 | orchestrator | Tuesday 10 March 2026 01:03:15 +0000 (0:00:02.696) 0:00:03.968 ********* 2026-03-10 01:06:31.255531 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:06:31.255568 | orchestrator | 2026-03-10 01:06:31.255577 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-10 01:06:31.255584 | orchestrator | Tuesday 10 March 2026 01:03:16 +0000 (0:00:01.333) 0:00:05.301 ********* 2026-03-10 01:06:31.255591 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-10 01:06:31.255598 | orchestrator | 2026-03-10 01:06:31.255608 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-10 01:06:31.255618 | orchestrator | Tuesday 10 March 2026 01:03:20 +0000 (0:00:03.647) 0:00:08.948 ********* 2026-03-10 01:06:31.255630 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-10 01:06:31.255641 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-10 01:06:31.255652 | orchestrator | 2026-03-10 01:06:31.255662 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-10 01:06:31.255672 | orchestrator | Tuesday 10 March 2026 01:03:26 +0000 (0:00:06.056) 0:00:15.005 ********* 2026-03-10 01:06:31.255683 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:06:31.255694 | orchestrator | 2026-03-10 01:06:31.255706 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-10 01:06:31.255717 | orchestrator | Tuesday 10 March 2026 01:03:29 +0000 (0:00:03.079) 0:00:18.085 ********* 2026-03-10 01:06:31.255729 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2026-03-10 01:06:31.255741 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-10 01:06:31.255752 | orchestrator | 2026-03-10 01:06:31.255762 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-10 01:06:31.255772 | orchestrator | Tuesday 10 March 2026 01:03:33 +0000 (0:00:04.124) 0:00:22.209 ********* 2026-03-10 01:06:31.255781 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:06:31.255792 | orchestrator | 2026-03-10 01:06:31.255803 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-10 01:06:31.255814 | orchestrator | Tuesday 10 March 2026 01:03:37 +0000 (0:00:03.905) 0:00:26.114 ********* 2026-03-10 01:06:31.255825 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-10 01:06:31.255837 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-10 01:06:31.255848 | orchestrator | 2026-03-10 01:06:31.255860 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-10 01:06:31.255870 | orchestrator | Tuesday 10 March 2026 01:03:45 +0000 (0:00:07.904) 0:00:34.019 ********* 2026-03-10 01:06:31.255899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.256070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.256092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.256099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256182 | orchestrator | 2026-03-10 01:06:31.256188 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:06:31.256195 | orchestrator | Tuesday 10 March 2026 01:03:47 +0000 (0:00:02.836) 0:00:36.855 ********* 2026-03-10 01:06:31.256202 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:31.256213 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:31.256219 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:31.256226 | orchestrator | 2026-03-10 01:06:31.256232 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:06:31.256238 | orchestrator | Tuesday 10 March 2026 01:03:48 +0000 (0:00:00.467) 0:00:37.323 ********* 2026-03-10 01:06:31.256244 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:06:31.256251 | orchestrator | 2026-03-10 01:06:31.256261 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-10 01:06:31.256267 | orchestrator | Tuesday 10 March 2026 01:03:49 +0000 (0:00:01.108) 0:00:38.431 ********* 2026-03-10 01:06:31.256273 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-10 01:06:31.256280 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 
2026-03-10 01:06:31.256286 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-10 01:06:31.256292 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-10 01:06:31.256298 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-10 01:06:31.256304 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-10 01:06:31.256310 | orchestrator | 2026-03-10 01:06:31.256316 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-10 01:06:31.256322 | orchestrator | Tuesday 10 March 2026 01:03:52 +0000 (0:00:02.634) 0:00:41.065 ********* 2026-03-10 01:06:31.256330 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:06:31.256338 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:06:31.256370 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:06:31.256388 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:06:31.256402 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:06:31.256409 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:06:31.256416 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:06:31.256433 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:06:31.256455 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:06:31.256466 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:06:31.256474 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:06:31.256481 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:06:31.256487 | orchestrator | 2026-03-10 01:06:31.256493 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-10 01:06:31.256500 | orchestrator | Tuesday 10 March 2026 01:03:56 +0000 (0:00:04.351) 0:00:45.417 ********* 2026-03-10 01:06:31.256506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:31.256513 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:31.256519 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:06:31.256525 | orchestrator | 2026-03-10 01:06:31.256532 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-10 01:06:31.256538 | orchestrator | Tuesday 10 March 2026 01:03:59 +0000 (0:00:02.804) 0:00:48.222 ********* 2026-03-10 01:06:31.256549 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-10 01:06:31.256556 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-10 01:06:31.256562 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-10 01:06:31.256568 
| orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-10 01:06:31.256574 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-10 01:06:31.256584 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-10 01:06:31.256590 | orchestrator | 2026-03-10 01:06:31.256596 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-10 01:06:31.256602 | orchestrator | Tuesday 10 March 2026 01:04:03 +0000 (0:00:04.056) 0:00:52.278 ********* 2026-03-10 01:06:31.256608 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-10 01:06:31.256615 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-10 01:06:31.256621 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-10 01:06:31.256627 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-10 01:06:31.256633 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-10 01:06:31.256639 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-10 01:06:31.256645 | orchestrator | 2026-03-10 01:06:31.256652 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-10 01:06:31.256658 | orchestrator | Tuesday 10 March 2026 01:04:04 +0000 (0:00:01.235) 0:00:53.514 ********* 2026-03-10 01:06:31.256664 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:31.256670 | orchestrator | 2026-03-10 01:06:31.256676 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-10 01:06:31.256682 | orchestrator | Tuesday 10 March 2026 01:04:04 +0000 (0:00:00.363) 0:00:53.878 ********* 2026-03-10 01:06:31.256689 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:31.256695 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:31.256704 | orchestrator | skipping: [testbed-node-2] 
2026-03-10 01:06:31.256711 | orchestrator | 2026-03-10 01:06:31.256717 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:06:31.256723 | orchestrator | Tuesday 10 March 2026 01:04:05 +0000 (0:00:00.494) 0:00:54.372 ********* 2026-03-10 01:06:31.256729 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:06:31.256736 | orchestrator | 2026-03-10 01:06:31.256742 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-10 01:06:31.256748 | orchestrator | Tuesday 10 March 2026 01:04:07 +0000 (0:00:01.753) 0:00:56.125 ********* 2026-03-10 01:06:31.256754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.256761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.256775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.256782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.256836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-10 01:06:31.257343 | orchestrator | 2026-03-10 01:06:31.257434 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-10 01:06:31.257442 | orchestrator | Tuesday 10 March 2026 01:04:12 +0000 (0:00:05.106) 0:01:01.231 ********* 2026-03-10 01:06:31.257460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:06:31.257468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257503 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:31.257511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:06:31.257522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:06:31.257529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 
01:06:31.257540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257583 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:31.257589 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:31.257595 | orchestrator | 2026-03-10 01:06:31.257602 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 
2026-03-10 01:06:31.257613 | orchestrator | Tuesday 10 March 2026 01:04:13 +0000 (0:00:01.100) 0:01:02.332 ********* 2026-03-10 01:06:31.257628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:06:31.257640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257738 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:06:31.257746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:06:31.257753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257784 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:06:31.257791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:06:31.257803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:06:31.257822 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:06:31.257829 | orchestrator | 2026-03-10 01:06:31.257838 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-10 01:06:31.257845 | orchestrator | Tuesday 10 March 2026 01:04:15 +0000 (0:00:02.254) 0:01:04.586 ********* 2026-03-10 01:06:31.257851 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.257862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.257874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.257881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.257964 | orchestrator | 2026-03-10 01:06:31.257972 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-10 01:06:31.257984 | orchestrator | Tuesday 10 March 2026 01:04:21 +0000 (0:00:05.406) 0:01:09.993 ********* 2026-03-10 01:06:31.257991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-10 01:06:31.258002 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-10 01:06:31.258010 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-10 01:06:31.258049 | orchestrator | 
2026-03-10 01:06:31.258057 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-10 01:06:31.258064 | orchestrator | Tuesday 10 March 2026 01:04:23 +0000 (0:00:02.329) 0:01:12.322 ********* 2026-03-10 01:06:31.258071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.258079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.258091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:06:31.258099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:06:31.258195 | orchestrator | 2026-03-10 01:06:31.258202 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-10 01:06:31.258210 | orchestrator | Tuesday 10 March 2026 01:04:41 +0000 (0:00:18.061) 0:01:30.384 ********* 2026-03-10 01:06:31.258217 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:06:31.258224 | orchestrator | changed: [testbed-node-1] 2026-03-10 
01:06:31.258232 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:31.258239 | orchestrator |
2026-03-10 01:06:31.258246 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-10 01:06:31.258252 | orchestrator | Tuesday 10 March 2026 01:04:44 +0000 (0:00:02.893) 0:01:33.278 *********
2026-03-10 01:06:31.258260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:06:31.258268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258302 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:31.258309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:06:31.258315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258345 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:31.258381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:06:31.258388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258408 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:31.258414 | orchestrator |
2026-03-10 01:06:31.258420 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-10 01:06:31.258427 | orchestrator | Tuesday 10 March 2026 01:04:45 +0000 (0:00:01.571) 0:01:34.849 *********
2026-03-10 01:06:31.258433 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:31.258439 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:31.258445 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:31.258457 | orchestrator |
2026-03-10 01:06:31.258464 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-10 01:06:31.258470 | orchestrator | Tuesday 10 March 2026 01:04:46 +0000 (0:00:00.570) 0:01:35.420 *********
2026-03-10 01:06:31.258477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:06:31.258515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:06:31.258522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:06:31.258529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:06:31.258611 | orchestrator |
2026-03-10 01:06:31.258618 |
orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-10 01:06:31.258624 | orchestrator | Tuesday 10 March 2026 01:04:49 +0000 (0:00:03.295) 0:01:38.715 *********
2026-03-10 01:06:31.258630 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:31.258636 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:06:31.258643 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:06:31.258649 | orchestrator |
2026-03-10 01:06:31.258655 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-03-10 01:06:31.258661 | orchestrator | Tuesday 10 March 2026 01:04:51 +0000 (0:00:01.200) 0:01:39.915 *********
2026-03-10 01:06:31.258667 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.258673 | orchestrator |
2026-03-10 01:06:31.258680 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-03-10 01:06:31.258686 | orchestrator | Tuesday 10 March 2026 01:04:54 +0000 (0:00:02.999) 0:01:42.915 *********
2026-03-10 01:06:31.258692 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.258698 | orchestrator |
2026-03-10 01:06:31.258704 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-03-10 01:06:31.258723 | orchestrator | Tuesday 10 March 2026 01:04:56 +0000 (0:00:02.685) 0:01:45.600 *********
2026-03-10 01:06:31.258730 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.258745 | orchestrator |
2026-03-10 01:06:31.258756 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-10 01:06:31.258769 | orchestrator | Tuesday 10 March 2026 01:05:19 +0000 (0:00:23.279) 0:02:08.880 *********
2026-03-10 01:06:31.258783 | orchestrator |
2026-03-10 01:06:31.258796 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-10 01:06:31.258805 | orchestrator | Tuesday 10 March 2026 01:05:20 +0000 (0:00:00.075) 0:02:08.956 *********
2026-03-10 01:06:31.258815 | orchestrator |
2026-03-10 01:06:31.258826 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-10 01:06:31.258836 | orchestrator | Tuesday 10 March 2026 01:05:20 +0000 (0:00:00.123) 0:02:09.079 *********
2026-03-10 01:06:31.258846 | orchestrator |
2026-03-10 01:06:31.258856 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-03-10 01:06:31.258866 | orchestrator | Tuesday 10 March 2026 01:05:20 +0000 (0:00:00.092) 0:02:09.172 *********
2026-03-10 01:06:31.258876 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.258885 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:31.258895 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:31.258906 | orchestrator |
2026-03-10 01:06:31.258917 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-03-10 01:06:31.258928 | orchestrator | Tuesday 10 March 2026 01:05:48 +0000 (0:00:27.961) 0:02:37.134 *********
2026-03-10 01:06:31.258945 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.258956 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:31.258967 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:31.258978 | orchestrator |
2026-03-10 01:06:31.258989 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-03-10 01:06:31.259001 | orchestrator | Tuesday 10 March 2026 01:05:55 +0000 (0:00:07.361) 0:02:44.495 *********
2026-03-10 01:06:31.259012 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.259023 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:31.259029 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:31.259036 | orchestrator |
2026-03-10 01:06:31.259042 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-03-10 01:06:31.259048 | orchestrator | Tuesday 10 March 2026 01:06:19 +0000 (0:00:24.226) 0:03:08.722 *********
2026-03-10 01:06:31.259054 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:06:31.259060 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:06:31.259066 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:06:31.259072 | orchestrator |
2026-03-10 01:06:31.259078 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-03-10 01:06:31.259085 | orchestrator | Tuesday 10 March 2026 01:06:28 +0000 (0:00:08.332) 0:03:17.055 *********
2026-03-10 01:06:31.259091 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:06:31.259097 | orchestrator |
2026-03-10 01:06:31.259103 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:06:31.259110 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-10 01:06:31.259118 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:06:31.259124 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:06:31.259130 | orchestrator |
2026-03-10 01:06:31.259136 | orchestrator |
2026-03-10 01:06:31.259142 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:06:31.259148 | orchestrator | Tuesday 10 March 2026 01:06:28 +0000 (0:00:00.298) 0:03:17.353 *********
2026-03-10 01:06:31.259154 | orchestrator | ===============================================================================
2026-03-10 01:06:31.259161 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.96s
2026-03-10 01:06:31.259167 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.23s
2026-03-10 01:06:31.259173 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 23.28s
2026-03-10 01:06:31.259179 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.06s
2026-03-10 01:06:31.259190 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.33s
2026-03-10 01:06:31.259196 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.90s
2026-03-10 01:06:31.259202 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.36s
2026-03-10 01:06:31.259208 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.06s
2026-03-10 01:06:31.259215 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.41s
2026-03-10 01:06:31.259221 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.11s
2026-03-10 01:06:31.259227 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.35s
2026-03-10 01:06:31.259233 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.12s
2026-03-10 01:06:31.259239 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.06s
2026-03-10 01:06:31.259245 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.91s
2026-03-10 01:06:31.259256 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.65s
2026-03-10 01:06:31.259262 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.30s
2026-03-10 01:06:31.259268 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.08s
2026-03-10 01:06:31.259280 | orchestrator | cinder : Creating Cinder database --------------------------------------- 3.00s
2026-03-10 01:06:31.259287 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.89s
2026-03-10 01:06:31.259293 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.84s
2026-03-10 01:06:31.259299 | orchestrator | 2026-03-10 01:06:31 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:06:31.260296 | orchestrator | 2026-03-10 01:06:31 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED
2026-03-10 01:06:31.260772 | orchestrator | 2026-03-10 01:06:31 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:06:34.302566 | orchestrator | 2026-03-10 01:06:34 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:06:34.303647 | orchestrator | 2026-03-10 01:06:34 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:06:34.305692 | orchestrator | 2026-03-10 01:06:34 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:06:34.307107 | orchestrator | 2026-03-10 01:06:34 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED
2026-03-10 01:06:34.307282 | orchestrator | 2026-03-10 01:06:34 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:06:37.348756 | orchestrator | 2026-03-10 01:06:37 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:06:37.349751 | orchestrator | 2026-03-10 01:06:37 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:06:37.351220 | orchestrator | 2026-03-10 01:06:37 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:06:37.353035 | orchestrator | 2026-03-10 01:06:37 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED
2026-03-10 01:06:37.353077 | orchestrator | 2026-03-10 01:06:37 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:06:40.427310 | orchestrator | 2026-03-10 01:06:40 | INFO  | Task
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:26.053671 | orchestrator | 2026-03-10 01:07:26 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:26.056207 | orchestrator | 2026-03-10 01:07:26 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:26.057492 | orchestrator | 2026-03-10 01:07:26 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:26.057548 | orchestrator | 2026-03-10 01:07:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:29.179882 | orchestrator | 2026-03-10 01:07:29 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:29.180110 | orchestrator | 2026-03-10 01:07:29 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:29.181172 | orchestrator | 2026-03-10 01:07:29 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:29.181729 | orchestrator | 2026-03-10 01:07:29 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:29.181753 | orchestrator | 2026-03-10 01:07:29 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:32.206251 | orchestrator | 2026-03-10 01:07:32 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:32.206662 | orchestrator | 2026-03-10 01:07:32 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:32.207419 | orchestrator | 2026-03-10 01:07:32 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:32.208595 | orchestrator | 2026-03-10 01:07:32 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:32.208616 | orchestrator | 2026-03-10 01:07:32 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:35.242112 | orchestrator | 2026-03-10 01:07:35 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:35.243231 | orchestrator | 2026-03-10 01:07:35 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:35.244099 | orchestrator | 2026-03-10 01:07:35 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:35.245481 | orchestrator | 2026-03-10 01:07:35 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:35.245549 | orchestrator | 2026-03-10 01:07:35 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:38.279849 | orchestrator | 2026-03-10 01:07:38 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:38.280596 | orchestrator | 2026-03-10 01:07:38 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:38.282509 | orchestrator | 2026-03-10 01:07:38 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:38.283415 | orchestrator | 2026-03-10 01:07:38 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:38.283450 | orchestrator | 2026-03-10 01:07:38 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:41.323478 | orchestrator | 2026-03-10 01:07:41 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:41.323882 | orchestrator | 2026-03-10 01:07:41 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:41.324774 | orchestrator | 2026-03-10 01:07:41 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:41.325551 | orchestrator | 2026-03-10 01:07:41 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:41.325593 | orchestrator | 2026-03-10 01:07:41 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:44.357574 | orchestrator | 2026-03-10 01:07:44 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:44.359008 | orchestrator | 2026-03-10 01:07:44 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:44.359745 | orchestrator | 2026-03-10 01:07:44 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:44.360537 | orchestrator | 2026-03-10 01:07:44 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:44.360578 | orchestrator | 2026-03-10 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:47.394492 | orchestrator | 2026-03-10 01:07:47 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:47.395391 | orchestrator | 2026-03-10 01:07:47 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:47.397005 | orchestrator | 2026-03-10 01:07:47 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:47.397885 | orchestrator | 2026-03-10 01:07:47 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:47.397915 | orchestrator | 2026-03-10 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:50.427067 | orchestrator | 2026-03-10 01:07:50 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:50.427744 | orchestrator | 2026-03-10 01:07:50 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:50.428149 | orchestrator | 2026-03-10 01:07:50 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:50.428996 | orchestrator | 2026-03-10 01:07:50 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:50.429038 | orchestrator | 2026-03-10 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:53.456848 | orchestrator | 2026-03-10 01:07:53 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:53.457041 | orchestrator | 2026-03-10 01:07:53 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:53.457461 | orchestrator | 2026-03-10 01:07:53 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:53.458139 | orchestrator | 2026-03-10 01:07:53 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:53.458176 | orchestrator | 2026-03-10 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:56.479581 | orchestrator | 2026-03-10 01:07:56 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:56.480440 | orchestrator | 2026-03-10 01:07:56 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:56.480968 | orchestrator | 2026-03-10 01:07:56 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:56.481793 | orchestrator | 2026-03-10 01:07:56 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:56.481836 | orchestrator | 2026-03-10 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:59.555945 | orchestrator | 2026-03-10 01:07:59 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:07:59.556024 | orchestrator | 2026-03-10 01:07:59 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:07:59.556030 | orchestrator | 2026-03-10 01:07:59 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:07:59.556034 | orchestrator | 2026-03-10 01:07:59 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:07:59.556039 | orchestrator | 2026-03-10 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:02.607902 | orchestrator | 2026-03-10 01:08:02 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:02.608560 | orchestrator | 2026-03-10 01:08:02 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:02.609469 | orchestrator | 2026-03-10 01:08:02 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:02.610286 | orchestrator | 2026-03-10 01:08:02 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:02.610439 | orchestrator | 2026-03-10 01:08:02 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:05.652703 | orchestrator | 2026-03-10 01:08:05 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:05.652926 | orchestrator | 2026-03-10 01:08:05 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:05.657021 | orchestrator | 2026-03-10 01:08:05 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:05.657755 | orchestrator | 2026-03-10 01:08:05 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:05.657838 | orchestrator | 2026-03-10 01:08:05 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:08.693086 | orchestrator | 2026-03-10 01:08:08 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:08.693648 | orchestrator | 2026-03-10 01:08:08 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:08.694138 | orchestrator | 2026-03-10 01:08:08 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:08.695107 | orchestrator | 2026-03-10 01:08:08 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:08.695256 | orchestrator | 2026-03-10 01:08:08 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:11.782794 | orchestrator | 2026-03-10 01:08:11 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:11.784253 | orchestrator | 2026-03-10 01:08:11 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:11.784551 | orchestrator | 2026-03-10 01:08:11 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:11.786621 | orchestrator | 2026-03-10 01:08:11 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:11.786654 | orchestrator | 2026-03-10 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:14.822301 | orchestrator | 2026-03-10 01:08:14 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:14.823160 | orchestrator | 2026-03-10 01:08:14 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:14.823924 | orchestrator | 2026-03-10 01:08:14 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:14.824800 | orchestrator | 2026-03-10 01:08:14 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:14.824873 | orchestrator | 2026-03-10 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:17.859024 | orchestrator | 2026-03-10 01:08:17 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:17.859799 | orchestrator | 2026-03-10 01:08:17 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:17.860549 | orchestrator | 2026-03-10 01:08:17 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:17.861673 | orchestrator | 2026-03-10 01:08:17 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:17.861715 | orchestrator | 2026-03-10 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:20.895541 | orchestrator | 2026-03-10 01:08:20 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:20.896769 | orchestrator | 2026-03-10 01:08:20 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:20.898255 | orchestrator | 2026-03-10 01:08:20 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:20.899993 | orchestrator | 2026-03-10 01:08:20 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:20.900050 | orchestrator | 2026-03-10 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:23.939455 | orchestrator | 2026-03-10 01:08:23 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:23.939777 | orchestrator | 2026-03-10 01:08:23 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:23.941028 | orchestrator | 2026-03-10 01:08:23 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:23.941739 | orchestrator | 2026-03-10 01:08:23 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:23.941826 | orchestrator | 2026-03-10 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:26.983603 | orchestrator | 2026-03-10 01:08:26 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:26.984725 | orchestrator | 2026-03-10 01:08:26 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:26.984865 | orchestrator | 2026-03-10 01:08:26 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:26.985689 | orchestrator | 2026-03-10 01:08:26 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:26.985764 | orchestrator | 2026-03-10 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:30.052150 | orchestrator | 2026-03-10 01:08:30 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:30.053557 | orchestrator | 2026-03-10 01:08:30 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:30.055293 | orchestrator | 2026-03-10 01:08:30 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:30.056383 | orchestrator | 2026-03-10 01:08:30 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:30.056413 | orchestrator | 2026-03-10 01:08:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:33.094816 | orchestrator | 2026-03-10 01:08:33 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:33.094958 | orchestrator | 2026-03-10 01:08:33 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:33.099435 | orchestrator | 2026-03-10 01:08:33 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:33.100191 | orchestrator | 2026-03-10 01:08:33 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:33.100339 | orchestrator | 2026-03-10 01:08:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:36.128618 | orchestrator | 2026-03-10 01:08:36 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:36.129035 | orchestrator | 2026-03-10 01:08:36 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:36.129679 | orchestrator | 2026-03-10 01:08:36 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:36.131245 | orchestrator | 2026-03-10 01:08:36 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:36.131273 | orchestrator | 2026-03-10 01:08:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:39.166301 | orchestrator | 2026-03-10 01:08:39 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:39.166752 | orchestrator | 2026-03-10 01:08:39 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:39.168529 | orchestrator | 2026-03-10 01:08:39 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:39.169464 | orchestrator | 2026-03-10 01:08:39 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:39.169494 | orchestrator | 2026-03-10 01:08:39 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:42.198151 | orchestrator | 2026-03-10 01:08:42 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:42.199040 | orchestrator | 2026-03-10 01:08:42 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:42.200186 | orchestrator | 2026-03-10 01:08:42 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:42.202102 | orchestrator | 2026-03-10 01:08:42 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state STARTED 2026-03-10 01:08:42.202146 | orchestrator | 2026-03-10 01:08:42 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:45.265859 | orchestrator | 2026-03-10 01:08:45 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:08:45.265965 | orchestrator | 2026-03-10 01:08:45 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:08:45.269605 | orchestrator | 2026-03-10 01:08:45 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:08:45.270833 | orchestrator | 2026-03-10 01:08:45 | INFO  | Task 22ad40ca-577f-4c44-8a6e-3b7ca4d890cd is in state SUCCESS 2026-03-10 01:08:45.272648 | orchestrator | 2026-03-10 01:08:45.274268 | orchestrator | 2026-03-10 01:08:45.274324 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
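The status lines above come from a simple poll-until-done loop: the client checks the state of each submitted task ID, waits one second, and repeats until every task leaves STARTED (here, task 22ad40ca-… finally reports SUCCESS). A minimal sketch of that pattern follows; the function and parameter names are hypothetical illustrations of the loop seen in the log, not the actual OSISM client API.

```python
import time


def get_task_state(task_id, registry):
    # Stand-in for the real status lookup (the OSISM tooling queries
    # Celery-style task states); here a plain dict simulates it.
    return registry.get(task_id, "PENDING")


def wait_for_tasks(task_ids, registry, interval=1.0, timeout=300.0):
    """Poll each task ID until all reach a terminal state, logging like the job console."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id, registry)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return True
```

A fixed one-second interval keeps the log readable at the cost of chatty output on long-running tasks, which is exactly the wall of repeated status lines this job produced.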
2026-03-10 01:08:45.274362 | orchestrator | 2026-03-10 01:08:45.274368 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:08:45.274374 | orchestrator | Tuesday 10 March 2026 01:06:19 +0000 (0:00:00.614) 0:00:00.614 ********* 2026-03-10 01:08:45.274379 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:08:45.274386 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:08:45.274391 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:08:45.274396 | orchestrator | 2026-03-10 01:08:45.274401 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:08:45.274406 | orchestrator | Tuesday 10 March 2026 01:06:20 +0000 (0:00:00.679) 0:00:01.294 ********* 2026-03-10 01:08:45.274412 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-10 01:08:45.274417 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-10 01:08:45.274422 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-10 01:08:45.274427 | orchestrator | 2026-03-10 01:08:45.274432 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-10 01:08:45.274437 | orchestrator | 2026-03-10 01:08:45.274442 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-10 01:08:45.274447 | orchestrator | Tuesday 10 March 2026 01:06:21 +0000 (0:00:00.937) 0:00:02.231 ********* 2026-03-10 01:08:45.274453 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:08:45.274458 | orchestrator | 2026-03-10 01:08:45.274463 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-10 01:08:45.274468 | orchestrator | Tuesday 10 March 2026 01:06:22 +0000 (0:00:01.042) 0:00:03.274 ********* 2026-03-10 01:08:45.274474 | orchestrator | changed: 
[testbed-node-0] => (item=barbican (key-manager)) 2026-03-10 01:08:45.274479 | orchestrator | 2026-03-10 01:08:45.274484 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-10 01:08:45.274489 | orchestrator | Tuesday 10 March 2026 01:06:26 +0000 (0:00:04.349) 0:00:07.624 ********* 2026-03-10 01:08:45.274494 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-10 01:08:45.274500 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-10 01:08:45.274505 | orchestrator | 2026-03-10 01:08:45.274510 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-10 01:08:45.274515 | orchestrator | Tuesday 10 March 2026 01:06:34 +0000 (0:00:07.384) 0:00:15.008 ********* 2026-03-10 01:08:45.274537 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:08:45.274547 | orchestrator | 2026-03-10 01:08:45.274556 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-10 01:08:45.274564 | orchestrator | Tuesday 10 March 2026 01:06:38 +0000 (0:00:04.050) 0:00:19.059 ********* 2026-03-10 01:08:45.274573 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:08:45.274582 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-10 01:08:45.274590 | orchestrator | 2026-03-10 01:08:45.274599 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-10 01:08:45.274608 | orchestrator | Tuesday 10 March 2026 01:06:42 +0000 (0:00:04.363) 0:00:23.422 ********* 2026-03-10 01:08:45.274616 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:08:45.274625 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-10 01:08:45.274634 | orchestrator | changed: 
[testbed-node-0] => (item=creator) 2026-03-10 01:08:45.274643 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-10 01:08:45.274652 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-10 01:08:45.274662 | orchestrator | 2026-03-10 01:08:45.274668 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-10 01:08:45.274673 | orchestrator | Tuesday 10 March 2026 01:07:00 +0000 (0:00:17.362) 0:00:40.785 ********* 2026-03-10 01:08:45.274685 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-10 01:08:45.274690 | orchestrator | 2026-03-10 01:08:45.274695 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-10 01:08:45.274700 | orchestrator | Tuesday 10 March 2026 01:07:04 +0000 (0:00:04.417) 0:00:45.203 ********* 2026-03-10 01:08:45.274709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:45.274729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:45.274736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:45.274745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.274755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.274764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.274775 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.274782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.274788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.274793 | orchestrator |
2026-03-10 01:08:45.274798 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-10 01:08:45.274881 | orchestrator | Tuesday 10 March 2026 01:07:06 +0000 (0:00:02.411) 0:00:47.614 *********
2026-03-10 01:08:45.274887 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-10 01:08:45.274893 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-10 01:08:45.274921 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-10 01:08:45.274949 | orchestrator |
2026-03-10 01:08:45.274962 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-10 01:08:45.274968 | orchestrator | Tuesday 10 March 2026 01:07:07 +0000 (0:00:01.135) 0:00:48.750 *********
2026-03-10 01:08:45.274981 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:45.274988 | orchestrator |
2026-03-10 01:08:45.274994 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-10 01:08:45.275006 | orchestrator | Tuesday 10 March 2026 01:07:08 +0000 (0:00:00.245) 0:00:48.996 *********
2026-03-10 01:08:45.275012 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:45.275018 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:45.275024 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:45.275029 | orchestrator |
2026-03-10 01:08:45.275034 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-10 01:08:45.275039 | orchestrator | Tuesday 10 March 2026 01:07:08 +0000 (0:00:00.538) 0:00:49.534 *********
2026-03-10 01:08:45.275045 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:08:45.275050 | orchestrator |
2026-03-10 01:08:45.275055 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-10 01:08:45.275060 | orchestrator | Tuesday 10 March 2026 01:07:09 +0000 (0:00:00.914) 0:00:50.449 *********
2026-03-10 01:08:45.275066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275176 | orchestrator |
2026-03-10 01:08:45.275181 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-03-10 01:08:45.275186 | orchestrator | Tuesday 10 March 2026 01:07:13 +0000 (0:00:04.235) 0:00:54.685 *********
2026-03-10 01:08:45.275199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275216 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:45.275227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275247 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:45.275255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275272 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:45.275277 | orchestrator |
2026-03-10 01:08:45.275286 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-10 01:08:45.275291 | orchestrator | Tuesday 10 March 2026 01:07:16 +0000 (0:00:02.201) 0:00:56.889 *********
2026-03-10 01:08:45.275297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275378 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:45.275384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275406 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:45.275411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275434 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:45.275439 | orchestrator |
2026-03-10 01:08:45.275444 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-10 01:08:45.275449 | orchestrator | Tuesday 10 March 2026 01:07:18 +0000 (0:00:02.041) 0:00:58.931 *********
2026-03-10 01:08:45.275455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275677 | orchestrator |
2026-03-10 01:08:45.275682 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-10 01:08:45.275687 | orchestrator | Tuesday 10 March 2026 01:07:22 +0000 (0:00:04.681) 0:01:03.612 *********
2026-03-10 01:08:45.275692 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.275698 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:45.275703 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:45.275708 | orchestrator |
2026-03-10 01:08:45.275713 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-10 01:08:45.275718 | orchestrator | Tuesday 10 March 2026 01:07:25 +0000 (0:00:03.068) 0:01:06.681 *********
2026-03-10 01:08:45.275726 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-10 01:08:45.275732 | orchestrator |
2026-03-10 01:08:45.275737 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-10 01:08:45.275742 | orchestrator | Tuesday 10 March 2026 01:07:27 +0000 (0:00:01.993) 0:01:08.674 *********
2026-03-10 01:08:45.275747 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:45.275752 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:45.275757 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:45.275762 | orchestrator |
2026-03-10 01:08:45.275768 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-10 01:08:45.275773 | orchestrator | Tuesday 10 March 2026 01:07:30 +0000 (0:00:02.309) 0:01:10.984 *********
2026-03-10 01:08:45.275778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:45.275802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.275822 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.275835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.275841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.275846 | orchestrator | 2026-03-10 01:08:45.275851 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-10 
01:08:45.275857 | orchestrator | Tuesday 10 March 2026 01:07:43 +0000 (0:00:13.538) 0:01:24.522 ********* 2026-03-10 01:08:45.275865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 01:08:45.275870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 01:08:45.275876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:08:45.275881 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:08:45.275888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 01:08:45.275898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 01:08:45.275903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:08:45.275912 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:08:45.275924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 01:08:45.275932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 01:08:45.275943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:08:45.275962 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:08:45.275971 | orchestrator | 2026-03-10 01:08:45.275978 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-10 01:08:45.275986 | orchestrator | Tuesday 10 March 2026 01:07:45 +0000 (0:00:01.386) 0:01:25.908 ********* 2026-03-10 01:08:45.276000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:45.276009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:45.276021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:45.276030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.276047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.276060 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.276068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.276076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:45.276088 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:45.276097 | orchestrator |
2026-03-10 01:08:45.276105 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-10 01:08:45.276113 | orchestrator | Tuesday 10 March 2026 01:07:50 +0000 (0:00:05.523) 0:01:31.432 *********
2026-03-10 01:08:45.276121 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:45.276130 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:45.276139 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:45.276147 | orchestrator |
2026-03-10 01:08:45.276155 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-10 01:08:45.276170 | orchestrator | Tuesday 10 March 2026 01:07:51 +0000 (0:00:00.805) 0:01:32.237 *********
2026-03-10 01:08:45.276178 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.276186 | orchestrator |
2026-03-10 01:08:45.276196 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-10 01:08:45.276202 | orchestrator | Tuesday 10 March 2026 01:07:53 +0000 (0:00:02.245) 0:01:34.483 *********
2026-03-10 01:08:45.276207 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.276212 | orchestrator |
2026-03-10 01:08:45.276218 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-10 01:08:45.276223 | orchestrator | Tuesday 10 March 2026 01:07:56 +0000 (0:00:02.591) 0:01:37.074 *********
2026-03-10 01:08:45.276228 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.276234 | orchestrator |
2026-03-10 01:08:45.276241 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-10 01:08:45.276247 | orchestrator | Tuesday 10 March 2026 01:08:08 +0000 (0:00:12.579) 0:01:49.654 *********
2026-03-10 01:08:45.276253 | orchestrator |
2026-03-10 01:08:45.276259 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-10 01:08:45.276265 | orchestrator | Tuesday 10 March 2026 01:08:09 +0000 (0:00:00.164) 0:01:49.819 *********
2026-03-10 01:08:45.276272 | orchestrator |
2026-03-10 01:08:45.276278 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-10 01:08:45.276284 | orchestrator | Tuesday 10 March 2026 01:08:09 +0000 (0:00:00.146) 0:01:49.965 *********
2026-03-10 01:08:45.276290 | orchestrator |
2026-03-10 01:08:45.276297 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-10 01:08:45.276303 | orchestrator | Tuesday 10 March 2026 01:08:09 +0000 (0:00:00.174) 0:01:50.139 *********
2026-03-10 01:08:45.276326 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.276333 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:45.276339 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:45.276345 | orchestrator |
2026-03-10 01:08:45.276352 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-10 01:08:45.276358 | orchestrator | Tuesday 10 March 2026 01:08:24 +0000 (0:00:15.369) 0:02:05.509 *********
2026-03-10 01:08:45.276365 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.276371 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:45.276383 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:45.276389 | orchestrator |
2026-03-10 01:08:45.276396 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-10 01:08:45.276403 | orchestrator | Tuesday 10 March 2026 01:08:32 +0000 (0:00:07.719) 0:02:13.229 *********
2026-03-10 01:08:45.276409 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:45.276415 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:45.276421 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:45.276427 | orchestrator |
2026-03-10 01:08:45.276434 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:08:45.276442 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:08:45.276449 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 01:08:45.276456 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 01:08:45.276463 | orchestrator |
2026-03-10 01:08:45.276469 | orchestrator |
2026-03-10 01:08:45.276475 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:08:45.276481 | orchestrator | Tuesday 10 March 2026 01:08:43 +0000 (0:00:11.453) 0:02:24.682 *********
2026-03-10 01:08:45.276488 | orchestrator | ===============================================================================
2026-03-10 01:08:45.276494 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.36s
2026-03-10 01:08:45.276506 | orchestrator | barbican : Restart barbican-api container ------------------------------ 15.37s
2026-03-10 01:08:45.276512 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.54s
2026-03-10 01:08:45.276519 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.58s
2026-03-10 01:08:45.276525 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.45s
2026-03-10 01:08:45.276531 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.72s
2026-03-10 01:08:45.276538 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.38s
2026-03-10 01:08:45.276544 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.52s
2026-03-10 01:08:45.276550 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.68s
2026-03-10 01:08:45.276561 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.42s
2026-03-10 01:08:45.276567 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.36s
2026-03-10 01:08:45.276573 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.35s
2026-03-10 01:08:45.276580 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.24s
2026-03-10 01:08:45.276586 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.05s
2026-03-10 01:08:45.276593 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.07s
2026-03-10 01:08:45.276599 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.59s
2026-03-10 01:08:45.276606 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.41s
2026-03-10 01:08:45.276612 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 2.31s
2026-03-10 01:08:45.276617 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.25s
2026-03-10 01:08:45.276623 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.20s
2026-03-10
01:08:45.276769 | orchestrator | 2026-03-10 01:08:45 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:48.315610 | orchestrator | 2026-03-10 01:08:48 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:08:48.316282 | orchestrator | 2026-03-10 01:08:48 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:08:48.317631 | orchestrator | 2026-03-10 01:08:48 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:08:48.318621 | orchestrator | 2026-03-10 01:08:48 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:08:48.318665 | orchestrator | 2026-03-10 01:08:48 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:51.353914 | orchestrator | 2026-03-10 01:08:51 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:08:51.354933 | orchestrator | 2026-03-10 01:08:51 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:08:51.357632 | orchestrator | 2026-03-10 01:08:51 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:08:51.360473 | orchestrator | 2026-03-10 01:08:51 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:08:51.360543 | orchestrator | 2026-03-10 01:08:51 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:54.400962 | orchestrator | 2026-03-10 01:08:54 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:08:54.401760 | orchestrator | 2026-03-10 01:08:54 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:08:54.403204 | orchestrator | 2026-03-10 01:08:54 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:08:54.403950 | orchestrator | 2026-03-10 01:08:54 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:08:54.404195 | orchestrator | 2026-03-10 01:08:54 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:57.430246 | orchestrator | 2026-03-10 01:08:57 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:08:57.430859 | orchestrator | 2026-03-10 01:08:57 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:08:57.431759 | orchestrator | 2026-03-10 01:08:57 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:08:57.433954 | orchestrator | 2026-03-10 01:08:57 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:08:57.433988 | orchestrator | 2026-03-10 01:08:57 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:00.460033 | orchestrator | 2026-03-10 01:09:00 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:00.460599 | orchestrator | 2026-03-10 01:09:00 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:00.461722 | orchestrator | 2026-03-10 01:09:00 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:00.462777 | orchestrator | 2026-03-10 01:09:00 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:00.462811 | orchestrator | 2026-03-10 01:09:00 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:03.491590 | orchestrator | 2026-03-10 01:09:03 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:03.492521 | orchestrator | 2026-03-10 01:09:03 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:03.493091 | orchestrator | 2026-03-10 01:09:03 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:03.493965 | orchestrator | 2026-03-10 01:09:03 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:03.493987 | orchestrator | 2026-03-10 01:09:03 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:06.520487 | orchestrator | 2026-03-10 01:09:06 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:06.521022 | orchestrator | 2026-03-10 01:09:06 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:06.521858 | orchestrator | 2026-03-10 01:09:06 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:06.522592 | orchestrator | 2026-03-10 01:09:06 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:06.522833 | orchestrator | 2026-03-10 01:09:06 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:09.544553 | orchestrator | 2026-03-10 01:09:09 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:09.544785 | orchestrator | 2026-03-10 01:09:09 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:09.545555 | orchestrator | 2026-03-10 01:09:09 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:09.546409 | orchestrator | 2026-03-10 01:09:09 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:09.546433 | orchestrator | 2026-03-10 01:09:09 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:12.601142 | orchestrator | 2026-03-10 01:09:12 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:12.601405 | orchestrator | 2026-03-10 01:09:12 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:12.603160 | orchestrator | 2026-03-10 01:09:12 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:12.603748 | orchestrator | 2026-03-10 01:09:12 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:12.603770 | orchestrator | 2026-03-10 01:09:12 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:15.645759 | orchestrator | 2026-03-10 01:09:15 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:15.646573 | orchestrator | 2026-03-10 01:09:15 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:15.649961 | orchestrator | 2026-03-10 01:09:15 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:15.650935 | orchestrator | 2026-03-10 01:09:15 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:15.650974 | orchestrator | 2026-03-10 01:09:15 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:18.691392 | orchestrator | 2026-03-10 01:09:18 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:18.692089 | orchestrator | 2026-03-10 01:09:18 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:18.693127 | orchestrator | 2026-03-10 01:09:18 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:18.694594 | orchestrator | 2026-03-10 01:09:18 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:18.694642 | orchestrator | 2026-03-10 01:09:18 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:21.732068 | orchestrator | 2026-03-10 01:09:21 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:21.732581 | orchestrator | 2026-03-10 01:09:21 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:21.734205 | orchestrator | 2026-03-10 01:09:21 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:21.735908 | orchestrator | 2026-03-10 01:09:21 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:21.735960 | orchestrator | 2026-03-10 01:09:21 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:24.766798 | orchestrator | 2026-03-10 01:09:24 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:24.767211 | orchestrator | 2026-03-10 01:09:24 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:24.770602 | orchestrator | 2026-03-10 01:09:24 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:24.772642 | orchestrator | 2026-03-10 01:09:24 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:24.773003 | orchestrator | 2026-03-10 01:09:24 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:27.809597 | orchestrator | 2026-03-10 01:09:27 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:27.809890 | orchestrator | 2026-03-10 01:09:27 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:27.810663 | orchestrator | 2026-03-10 01:09:27 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:27.811399 | orchestrator | 2026-03-10 01:09:27 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:27.811612 | orchestrator | 2026-03-10 01:09:27 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:30.871290 | orchestrator | 2026-03-10 01:09:30 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED
2026-03-10 01:09:30.879515 | orchestrator | 2026-03-10 01:09:30 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:09:30.880784 | orchestrator | 2026-03-10 01:09:30 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED
2026-03-10 01:09:30.881636 | orchestrator | 2026-03-10 01:09:30 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED
2026-03-10 01:09:30.881805 | orchestrator | 2026-03-10 01:09:30 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:09:33.919811 | orchestrator
| 2026-03-10 01:09:33 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:33.921110 | orchestrator | 2026-03-10 01:09:33 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:33.921967 | orchestrator | 2026-03-10 01:09:33 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:33.923172 | orchestrator | 2026-03-10 01:09:33 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:33.923206 | orchestrator | 2026-03-10 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:36.979599 | orchestrator | 2026-03-10 01:09:36 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:36.980375 | orchestrator | 2026-03-10 01:09:36 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:36.982709 | orchestrator | 2026-03-10 01:09:36 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:36.983615 | orchestrator | 2026-03-10 01:09:36 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:36.983919 | orchestrator | 2026-03-10 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:40.078380 | orchestrator | 2026-03-10 01:09:40 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:40.079699 | orchestrator | 2026-03-10 01:09:40 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:40.080950 | orchestrator | 2026-03-10 01:09:40 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:40.082310 | orchestrator | 2026-03-10 01:09:40 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:40.082375 | orchestrator | 2026-03-10 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:43.122502 | orchestrator | 2026-03-10 01:09:43 | INFO  | 
Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:43.125644 | orchestrator | 2026-03-10 01:09:43 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:43.127559 | orchestrator | 2026-03-10 01:09:43 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:43.130156 | orchestrator | 2026-03-10 01:09:43 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:43.130500 | orchestrator | 2026-03-10 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:46.166849 | orchestrator | 2026-03-10 01:09:46 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:46.167456 | orchestrator | 2026-03-10 01:09:46 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:46.169200 | orchestrator | 2026-03-10 01:09:46 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:46.170517 | orchestrator | 2026-03-10 01:09:46 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:46.170816 | orchestrator | 2026-03-10 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:49.214142 | orchestrator | 2026-03-10 01:09:49 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:49.214973 | orchestrator | 2026-03-10 01:09:49 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:49.216680 | orchestrator | 2026-03-10 01:09:49 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:49.220154 | orchestrator | 2026-03-10 01:09:49 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:49.220222 | orchestrator | 2026-03-10 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:52.260184 | orchestrator | 2026-03-10 01:09:52 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:52.260929 | orchestrator | 2026-03-10 01:09:52 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:52.262148 | orchestrator | 2026-03-10 01:09:52 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:52.263823 | orchestrator | 2026-03-10 01:09:52 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:52.263869 | orchestrator | 2026-03-10 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:55.318268 | orchestrator | 2026-03-10 01:09:55 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:55.318768 | orchestrator | 2026-03-10 01:09:55 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:55.319743 | orchestrator | 2026-03-10 01:09:55 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:55.320586 | orchestrator | 2026-03-10 01:09:55 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:55.320693 | orchestrator | 2026-03-10 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:58.354831 | orchestrator | 2026-03-10 01:09:58 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:09:58.355739 | orchestrator | 2026-03-10 01:09:58 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:09:58.356510 | orchestrator | 2026-03-10 01:09:58 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:09:58.358172 | orchestrator | 2026-03-10 01:09:58 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:09:58.358239 | orchestrator | 2026-03-10 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:01.407636 | orchestrator | 2026-03-10 01:10:01 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:01.409032 | orchestrator | 2026-03-10 01:10:01 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:01.410612 | orchestrator | 2026-03-10 01:10:01 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:01.414481 | orchestrator | 2026-03-10 01:10:01 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:01.414597 | orchestrator | 2026-03-10 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:04.461186 | orchestrator | 2026-03-10 01:10:04 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:04.462176 | orchestrator | 2026-03-10 01:10:04 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:04.462913 | orchestrator | 2026-03-10 01:10:04 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:04.463821 | orchestrator | 2026-03-10 01:10:04 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:04.463855 | orchestrator | 2026-03-10 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:07.505255 | orchestrator | 2026-03-10 01:10:07 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:07.507054 | orchestrator | 2026-03-10 01:10:07 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:07.507582 | orchestrator | 2026-03-10 01:10:07 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:07.509085 | orchestrator | 2026-03-10 01:10:07 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:07.509198 | orchestrator | 2026-03-10 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:10.548586 | orchestrator | 2026-03-10 01:10:10 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:10.548646 | orchestrator | 2026-03-10 01:10:10 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:10.548690 | orchestrator | 2026-03-10 01:10:10 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:10.549437 | orchestrator | 2026-03-10 01:10:10 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:10.549506 | orchestrator | 2026-03-10 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:13.596865 | orchestrator | 2026-03-10 01:10:13 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:13.598689 | orchestrator | 2026-03-10 01:10:13 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:13.600676 | orchestrator | 2026-03-10 01:10:13 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:13.602832 | orchestrator | 2026-03-10 01:10:13 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:13.602921 | orchestrator | 2026-03-10 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:16.639918 | orchestrator | 2026-03-10 01:10:16 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:16.639995 | orchestrator | 2026-03-10 01:10:16 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:16.640729 | orchestrator | 2026-03-10 01:10:16 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:16.641693 | orchestrator | 2026-03-10 01:10:16 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:16.641758 | orchestrator | 2026-03-10 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:19.681667 | orchestrator | 2026-03-10 01:10:19 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:19.682819 | orchestrator | 2026-03-10 01:10:19 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:19.685671 | orchestrator | 2026-03-10 01:10:19 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:19.687391 | orchestrator | 2026-03-10 01:10:19 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state STARTED 2026-03-10 01:10:19.687465 | orchestrator | 2026-03-10 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:22.732599 | orchestrator | 2026-03-10 01:10:22 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:22.734497 | orchestrator | 2026-03-10 01:10:22 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:22.736503 | orchestrator | 2026-03-10 01:10:22 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:22.738845 | orchestrator | 2026-03-10 01:10:22 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:22.743420 | orchestrator | 2026-03-10 01:10:22 | INFO  | Task 35c48d77-c76b-466d-ba49-05635be6e523 is in state SUCCESS 2026-03-10 01:10:22.745975 | orchestrator | 2026-03-10 01:10:22.746065 | orchestrator | 2026-03-10 01:10:22.746074 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:10:22.746080 | orchestrator | 2026-03-10 01:10:22.746084 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:10:22.746089 | orchestrator | Tuesday 10 March 2026 01:06:34 +0000 (0:00:00.286) 0:00:00.286 ********* 2026-03-10 01:10:22.746093 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:10:22.746098 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:10:22.746102 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:10:22.746106 | orchestrator | 
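The block above shows the orchestrator polling the state of a fixed set of task IDs every few seconds until one leaves STARTED (here, task 35c48d77… reaching SUCCESS). A minimal sketch of that poll-until-terminal pattern, assuming a caller-supplied `get_state` lookup (in the real deployment this would query the task result backend; the function and stub below are hypothetical illustrations, not the osism implementation):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll each task's state until all reach a terminal state.

    get_state(task_id) -> str is a caller-supplied lookup (hypothetical
    stand-in for querying the real task result backend)."""
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {pending}")
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.remove(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True

# Stubbed lookup: t1 finishes on the second poll, t2 on the first.
states = {"t1": iter(["STARTED", "SUCCESS"]), "t2": iter(["SUCCESS"])}
done = wait_for_tasks(["t1", "t2"], lambda t: next(states[t]), interval=0.01)
```

The loop re-checks only still-pending IDs each cycle, which matches how the log stops mentioning a task once it reports SUCCESS.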
2026-03-10 01:10:22.746110 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:10:22.746115 | orchestrator | Tuesday 10 March 2026 01:06:34 +0000 (0:00:00.346) 0:00:00.632 *********
2026-03-10 01:10:22.746119 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-10 01:10:22.746124 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-10 01:10:22.746127 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-10 01:10:22.746131 | orchestrator |
2026-03-10 01:10:22.746135 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-10 01:10:22.746139 | orchestrator |
2026-03-10 01:10:22.746143 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-10 01:10:22.746146 | orchestrator | Tuesday 10 March 2026 01:06:35 +0000 (0:00:00.473) 0:00:01.105 *********
2026-03-10 01:10:22.746163 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:10:22.746167 | orchestrator |
2026-03-10 01:10:22.746171 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-10 01:10:22.746175 | orchestrator | Tuesday 10 March 2026 01:06:35 +0000 (0:00:00.628) 0:00:01.734 *********
2026-03-10 01:10:22.746179 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-10 01:10:22.746182 | orchestrator |
2026-03-10 01:10:22.746186 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-10 01:10:22.746190 | orchestrator | Tuesday 10 March 2026 01:06:39 +0000 (0:00:04.002) 0:00:05.736 *********
2026-03-10 01:10:22.746194 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-10 01:10:22.746198 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-10 01:10:22.746202 | orchestrator |
2026-03-10 01:10:22.746206 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-10 01:10:22.746209 | orchestrator | Tuesday 10 March 2026 01:06:46 +0000 (0:00:06.758) 0:00:12.494 *********
2026-03-10 01:10:22.746213 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-10 01:10:22.746217 | orchestrator |
2026-03-10 01:10:22.746221 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-10 01:10:22.746225 | orchestrator | Tuesday 10 March 2026 01:06:50 +0000 (0:00:03.613) 0:00:16.108 *********
2026-03-10 01:10:22.746229 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-10 01:10:22.746233 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-10 01:10:22.746251 | orchestrator |
2026-03-10 01:10:22.746256 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-10 01:10:22.746259 | orchestrator | Tuesday 10 March 2026 01:06:54 +0000 (0:00:04.461) 0:00:20.569 *********
2026-03-10 01:10:22.746263 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-10 01:10:22.746267 | orchestrator |
2026-03-10 01:10:22.746271 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-10 01:10:22.746323 | orchestrator | Tuesday 10 March 2026 01:06:58 +0000 (0:00:04.014) 0:00:24.584 *********
2026-03-10 01:10:22.746327 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-10 01:10:22.746331 | orchestrator |
2026-03-10 01:10:22.746335 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-10 01:10:22.746339 | orchestrator | Tuesday 10 March 2026 01:07:03 +0000 (0:00:04.711) 0:00:29.295 *********
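The service-ks-register tasks above run an idempotent ensure-style sequence against Keystone: service, then endpoints, then project, user, role, and finally the role grant; existing objects report `ok` (the `service` project and `admin` role), newly created ones report `changed`. A minimal in-memory sketch of that sequence, using a hypothetical `ensure` helper and a plain dict standing in for the Keystone API (names and URLs mirror the log; nothing here is the real role's code):

```python
def ensure(collection, key, value=None):
    """Create-if-missing; report 'changed'/'ok' like the Ansible tasks."""
    if key in collection:
        return "ok"
    collection[key] = value
    return "changed"

# Pre-existing state: the 'service' project and 'admin' role already exist.
keystone = {"services": {}, "endpoints": {}, "projects": {"service": {}},
            "users": {}, "roles": {"admin": {}}, "grants": {}}

results = [
    ensure(keystone["services"], "designate", {"type": "dns"}),
    ensure(keystone["endpoints"], ("designate", "internal"),
           "https://api-int.testbed.osism.xyz:9001"),
    ensure(keystone["endpoints"], ("designate", "public"),
           "https://api.testbed.osism.xyz:9001"),
    ensure(keystone["projects"], "service"),           # already exists -> ok
    ensure(keystone["users"], "designate", {"project": "service"}),
    ensure(keystone["roles"], "admin"),                # already exists -> ok
    ensure(keystone["grants"], ("designate", "service", "admin")),
]
```

The `results` list comes out as changed/changed/changed/ok/changed/ok/changed, matching the task outcomes in the log above.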
2026-03-10 01:10:22.746345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.746368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.746377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.746382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.746393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.746397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.746402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746504 | orchestrator |
2026-03-10 01:10:22.746508 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-10 01:10:22.746512 | orchestrator | Tuesday 10 March 2026 01:07:07 +0000 (0:00:03.782) 0:00:33.078 *********
2026-03-10 01:10:22.746516 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:10:22.746520 | orchestrator |
2026-03-10 01:10:22.746524 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-10 01:10:22.746528 | orchestrator | Tuesday 10 March 2026 01:07:07 +0000 (0:00:00.361) 0:00:33.440 *********
2026-03-10 01:10:22.746532 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:10:22.746535 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:10:22.746539 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:10:22.746592 | orchestrator |
2026-03-10 01:10:22.746596 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-10 01:10:22.746601 | orchestrator | Tuesday 10 March 2026 01:07:07 +0000 (0:00:00.338) 0:00:33.778 *********
2026-03-10 01:10:22.746605 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:10:22.746610 | orchestrator |
2026-03-10 01:10:22.746626 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-10 01:10:22.746630 | orchestrator | Tuesday 10 March 2026 01:07:08 +0000 (0:00:00.839) 0:00:34.618 *********
2026-03-10 01:10:22.746638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.746643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.746654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.746659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.746664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.746668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.746676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.746680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.746746 | orchestrator | 2026-03-10 01:10:22.746751 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-10 01:10:22.746755 | orchestrator | Tuesday 10 March 2026 01:07:15 +0000 (0:00:07.365) 0:00:41.983 ********* 2026-03-10 01:10:22.746760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-03-10 01:10:22.746775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:10:22.746795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.746800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-03-10 01:10:22.746805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.746810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.746814 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:22.746819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:10:22.746979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:10:22.746993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.746997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747009 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:22.747013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:10:22.747024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:10:22.747031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747035 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747047 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:22.747051 | orchestrator | 2026-03-10 01:10:22.747054 | 
orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-10 01:10:22.747058 | orchestrator | Tuesday 10 March 2026 01:07:17 +0000 (0:00:01.488) 0:00:43.472 ********* 2026-03-10 01:10:22.747062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:10:22.747082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:10:22.747089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747104 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:22.747108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:10:22.747120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:10:22.747130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:10:22.747137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:10:22.747158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747169 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:22.747173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:10:22.747184 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:22.747188 | orchestrator | 2026-03-10 01:10:22.747192 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-10 01:10:22.747199 | orchestrator | Tuesday 10 March 2026 01:07:20 +0000 (0:00:03.438) 0:00:46.910 ********* 2026-03-10 01:10:22.747203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.747210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.747217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.747221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.747293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:22.747307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748101 | orchestrator | 2026-03-10 01:10:22.748106 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-10 01:10:22.748111 | orchestrator | Tuesday 10 March 2026 01:07:27 +0000 (0:00:06.805) 0:00:53.716 ********* 2026-03-10 01:10:22.748117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.748123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.748140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.748145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.748271 | orchestrator | 2026-03-10 01:10:22.748296 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-10 01:10:22.748301 | orchestrator | Tuesday 10 March 2026 01:07:55 +0000 (0:00:27.971) 0:01:21.687 ********* 2026-03-10 01:10:22.748305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-10 01:10:22.748311 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-10 01:10:22.748316 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-10 01:10:22.748320 | orchestrator | 2026-03-10 01:10:22.748324 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-10 01:10:22.748328 | orchestrator | Tuesday 10 March 2026 01:08:04 +0000 (0:00:09.236) 0:01:30.924 ********* 2026-03-10 01:10:22.748333 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-10 01:10:22.748337 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-10 01:10:22.748342 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-10 01:10:22.748346 | orchestrator |
2026-03-10 01:10:22.748351 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-10 01:10:22.748355 | orchestrator | Tuesday 10 March 2026 01:08:08 +0000 (0:00:03.336) 0:01:34.260 *********
2026-03-10 01:10:22.748360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748479 | orchestrator |
2026-03-10 01:10:22.748483 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-10 01:10:22.748487 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:04.712) 0:01:38.973 *********
2026-03-10 01:10:22.748492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748868 | orchestrator |
2026-03-10 01:10:22.748875 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-10 01:10:22.748882 | orchestrator | Tuesday 10 March 2026 01:08:17 +0000 (0:00:04.271) 0:01:43.244 *********
2026-03-10 01:10:22.748889 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:10:22.748897 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:10:22.748904 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:10:22.748910 | orchestrator |
2026-03-10 01:10:22.748919 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-10 01:10:22.748926 | orchestrator | Tuesday 10 March 2026 01:08:18 +0000 (0:00:00.850) 0:01:44.095 *********
2026-03-10 01:10:22.748934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.748959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.748969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.748997 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:10:22.749005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.749030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.749039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749095 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:10:22.749102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.749117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-10 01:10:22.749131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:10:22.749162 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:10:22.749175 | orchestrator |
2026-03-10 01:10:22.749183 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-10 01:10:22.749191 | orchestrator | Tuesday 10 March 2026 01:08:19 +0000 (0:00:01.163) 0:01:45.258 *********
2026-03-10 01:10:22.749198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.749214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-10 01:10:22.749220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group':
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:10:22.749228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-03-10 01:10:22.749379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749403 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:10:22.749417 | orchestrator | 2026-03-10 01:10:22.749426 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-10 01:10:22.749431 | orchestrator | Tuesday 10 March 2026 01:08:25 +0000 (0:00:06.704) 0:01:51.963 ********* 2026-03-10 01:10:22.749436 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:22.749441 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:22.749446 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:22.749450 | orchestrator | 2026-03-10 01:10:22.749455 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-10 01:10:22.749460 | orchestrator | Tuesday 10 March 2026 01:08:26 +0000 (0:00:00.813) 0:01:52.776 ********* 2026-03-10 01:10:22.749466 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-10 01:10:22.749471 | orchestrator | 2026-03-10 01:10:22.749476 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-10 01:10:22.749481 | orchestrator | Tuesday 10 March 2026 01:08:29 +0000 (0:00:02.802) 0:01:55.579 ********* 2026-03-10 01:10:22.749486 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:10:22.749491 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-10 01:10:22.749495 | orchestrator | 2026-03-10 01:10:22.749500 | 
orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-10 01:10:22.749505 | orchestrator | Tuesday 10 March 2026 01:08:32 +0000 (0:00:02.891) 0:01:58.471 ********* 2026-03-10 01:10:22.749509 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:22.749513 | orchestrator | 2026-03-10 01:10:22.749518 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-10 01:10:22.749522 | orchestrator | Tuesday 10 March 2026 01:08:51 +0000 (0:00:18.873) 0:02:17.345 ********* 2026-03-10 01:10:22.749526 | orchestrator | 2026-03-10 01:10:22.749530 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-10 01:10:22.749535 | orchestrator | Tuesday 10 March 2026 01:08:51 +0000 (0:00:00.067) 0:02:17.412 ********* 2026-03-10 01:10:22.749539 | orchestrator | 2026-03-10 01:10:22.749544 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-10 01:10:22.749548 | orchestrator | Tuesday 10 March 2026 01:08:51 +0000 (0:00:00.123) 0:02:17.535 ********* 2026-03-10 01:10:22.749552 | orchestrator | 2026-03-10 01:10:22.749557 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-10 01:10:22.749566 | orchestrator | Tuesday 10 March 2026 01:08:51 +0000 (0:00:00.144) 0:02:17.680 ********* 2026-03-10 01:10:22.749571 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:22.749576 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:22.749580 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:22.749584 | orchestrator | 2026-03-10 01:10:22.749588 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-10 01:10:22.749597 | orchestrator | Tuesday 10 March 2026 01:09:08 +0000 (0:00:16.948) 0:02:34.628 ********* 2026-03-10 01:10:22.749601 | orchestrator | changed: [testbed-node-0] 
2026-03-10 01:10:22.749606 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:22.749611 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:22.749615 | orchestrator | 2026-03-10 01:10:22.749619 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-10 01:10:22.749623 | orchestrator | Tuesday 10 March 2026 01:09:24 +0000 (0:00:15.858) 0:02:50.487 ********* 2026-03-10 01:10:22.749628 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:22.749632 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:22.749636 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:22.749645 | orchestrator | 2026-03-10 01:10:22.749650 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-10 01:10:22.749655 | orchestrator | Tuesday 10 March 2026 01:09:36 +0000 (0:00:12.124) 0:03:02.612 ********* 2026-03-10 01:10:22.749659 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:22.749664 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:22.749668 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:22.749672 | orchestrator | 2026-03-10 01:10:22.749677 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-10 01:10:22.749681 | orchestrator | Tuesday 10 March 2026 01:09:50 +0000 (0:00:13.862) 0:03:16.475 ********* 2026-03-10 01:10:22.749685 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:22.749690 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:22.749694 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:22.749698 | orchestrator | 2026-03-10 01:10:22.749702 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-10 01:10:22.749707 | orchestrator | Tuesday 10 March 2026 01:10:03 +0000 (0:00:13.523) 0:03:29.998 ********* 2026-03-10 01:10:22.749711 | orchestrator | changed: [testbed-node-0] 2026-03-10 
01:10:22.749716 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:22.749720 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:22.749724 | orchestrator | 2026-03-10 01:10:22.749729 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-10 01:10:22.749733 | orchestrator | Tuesday 10 March 2026 01:10:12 +0000 (0:00:08.117) 0:03:38.116 ********* 2026-03-10 01:10:22.749737 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:22.749741 | orchestrator | 2026-03-10 01:10:22.749746 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:10:22.749750 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 01:10:22.749756 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:10:22.749760 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:10:22.749765 | orchestrator | 2026-03-10 01:10:22.749769 | orchestrator | 2026-03-10 01:10:22.749773 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:10:22.749778 | orchestrator | Tuesday 10 March 2026 01:10:19 +0000 (0:00:07.652) 0:03:45.768 ********* 2026-03-10 01:10:22.749782 | orchestrator | =============================================================================== 2026-03-10 01:10:22.749787 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.97s 2026-03-10 01:10:22.749791 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.87s 2026-03-10 01:10:22.749795 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.95s 2026-03-10 01:10:22.749800 | orchestrator | designate : Restart designate-api container ---------------------------- 
15.86s 2026-03-10 01:10:22.749804 | orchestrator | designate : Restart designate-producer container ----------------------- 13.86s 2026-03-10 01:10:22.749808 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.52s 2026-03-10 01:10:22.749813 | orchestrator | designate : Restart designate-central container ------------------------ 12.12s 2026-03-10 01:10:22.749817 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 9.24s 2026-03-10 01:10:22.749821 | orchestrator | designate : Restart designate-worker container -------------------------- 8.12s 2026-03-10 01:10:22.749826 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.65s 2026-03-10 01:10:22.749830 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.37s 2026-03-10 01:10:22.749835 | orchestrator | designate : Copying over config.json files for services ----------------- 6.81s 2026-03-10 01:10:22.749839 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.76s 2026-03-10 01:10:22.749848 | orchestrator | designate : Check designate containers ---------------------------------- 6.70s 2026-03-10 01:10:22.749853 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.71s 2026-03-10 01:10:22.749857 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.71s 2026-03-10 01:10:22.749861 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.46s 2026-03-10 01:10:22.749865 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.27s 2026-03-10 01:10:22.749870 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.01s 2026-03-10 01:10:22.749874 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.00s 
2026-03-10 01:10:25.776551 | orchestrator | 2026-03-10 01:10:25 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:25.777415 | orchestrator | 2026-03-10 01:10:25 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:25.778395 | orchestrator | 2026-03-10 01:10:25 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:25.781397 | orchestrator | 2026-03-10 01:10:25 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:25.781484 | orchestrator | 2026-03-10 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:28.815538 | orchestrator | 2026-03-10 01:10:28 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:28.825608 | orchestrator | 2026-03-10 01:10:28 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:28.828171 | orchestrator | 2026-03-10 01:10:28 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:28.829563 | orchestrator | 2026-03-10 01:10:28 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:28.829599 | orchestrator | 2026-03-10 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:31.860141 | orchestrator | 2026-03-10 01:10:31 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:31.860839 | orchestrator | 2026-03-10 01:10:31 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:31.861575 | orchestrator | 2026-03-10 01:10:31 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:31.862311 | orchestrator | 2026-03-10 01:10:31 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:31.862368 | orchestrator | 2026-03-10 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:34.893956 | 
orchestrator | 2026-03-10 01:10:34 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:34.894720 | orchestrator | 2026-03-10 01:10:34 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:34.895627 | orchestrator | 2026-03-10 01:10:34 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:34.896577 | orchestrator | 2026-03-10 01:10:34 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:34.896617 | orchestrator | 2026-03-10 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:37.934222 | orchestrator | 2026-03-10 01:10:37 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:37.935232 | orchestrator | 2026-03-10 01:10:37 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:37.936980 | orchestrator | 2026-03-10 01:10:37 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:37.938538 | orchestrator | 2026-03-10 01:10:37 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:37.938612 | orchestrator | 2026-03-10 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:40.978242 | orchestrator | 2026-03-10 01:10:40 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:40.979821 | orchestrator | 2026-03-10 01:10:40 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:40.981468 | orchestrator | 2026-03-10 01:10:40 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:40.982311 | orchestrator | 2026-03-10 01:10:40 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:40.982348 | orchestrator | 2026-03-10 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:44.048715 | orchestrator | 2026-03-10 
01:10:44 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:44.048834 | orchestrator | 2026-03-10 01:10:44 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:44.049232 | orchestrator | 2026-03-10 01:10:44 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:44.050375 | orchestrator | 2026-03-10 01:10:44 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:44.050441 | orchestrator | 2026-03-10 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:47.094924 | orchestrator | 2026-03-10 01:10:47 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:47.096539 | orchestrator | 2026-03-10 01:10:47 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:47.099157 | orchestrator | 2026-03-10 01:10:47 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:47.102307 | orchestrator | 2026-03-10 01:10:47 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:47.102366 | orchestrator | 2026-03-10 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:50.138960 | orchestrator | 2026-03-10 01:10:50 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:50.141178 | orchestrator | 2026-03-10 01:10:50 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:50.143398 | orchestrator | 2026-03-10 01:10:50 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:50.144642 | orchestrator | 2026-03-10 01:10:50 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:50.144684 | orchestrator | 2026-03-10 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:53.176304 | orchestrator | 2026-03-10 01:10:53 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:53.179637 | orchestrator | 2026-03-10 01:10:53 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:53.181516 | orchestrator | 2026-03-10 01:10:53 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:53.184125 | orchestrator | 2026-03-10 01:10:53 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:53.184202 | orchestrator | 2026-03-10 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:56.265580 | orchestrator | 2026-03-10 01:10:56 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:56.266416 | orchestrator | 2026-03-10 01:10:56 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:56.271967 | orchestrator | 2026-03-10 01:10:56 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:56.272558 | orchestrator | 2026-03-10 01:10:56 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:56.272601 | orchestrator | 2026-03-10 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:59.317334 | orchestrator | 2026-03-10 01:10:59 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:10:59.318744 | orchestrator | 2026-03-10 01:10:59 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:10:59.318906 | orchestrator | 2026-03-10 01:10:59 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:10:59.319397 | orchestrator | 2026-03-10 01:10:59 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:10:59.319446 | orchestrator | 2026-03-10 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:02.377433 | orchestrator | 2026-03-10 01:11:02 | INFO  | Task 
f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:11:38.876215 | orchestrator | 2026-03-10 01:11:38 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:11:38.877518 | orchestrator | 2026-03-10 01:11:38 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:11:38.879326 | orchestrator | 2026-03-10 01:11:38 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:11:38.879376 | orchestrator | 2026-03-10 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:41.913552 | orchestrator | 2026-03-10 01:11:41 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:11:41.914110 | orchestrator | 2026-03-10 01:11:41 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:11:41.915283 | orchestrator | 2026-03-10 01:11:41 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:11:41.916303 | orchestrator | 2026-03-10 01:11:41 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state STARTED 2026-03-10 01:11:41.916325 | orchestrator | 2026-03-10 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:44.957355 | orchestrator | 2026-03-10 01:11:44 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:11:44.959289 | orchestrator | 2026-03-10 01:11:44 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state STARTED 2026-03-10 01:11:44.959783 | orchestrator | 2026-03-10 01:11:44 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:11:44.960805 | orchestrator | 2026-03-10 01:11:44 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:11:44.962283 | orchestrator | 2026-03-10 01:11:44 | INFO  | Task 5474029a-fea8-4dcb-99a1-65801f4edc6c is in state SUCCESS 2026-03-10 01:11:44.962319 | orchestrator | 2026-03-10 01:11:44 | INFO  | Wait 1 
second(s) until the next check 2026-03-10 01:11:44.963471 | orchestrator | 2026-03-10 01:11:44.963518 | orchestrator | 2026-03-10 01:11:44.963524 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:11:44.963528 | orchestrator | 2026-03-10 01:11:44.963533 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:11:44.963537 | orchestrator | Tuesday 10 March 2026 01:10:26 +0000 (0:00:00.338) 0:00:00.338 ********* 2026-03-10 01:11:44.963541 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:11:44.963546 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:11:44.963550 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:11:44.963553 | orchestrator | 2026-03-10 01:11:44.963558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:11:44.963562 | orchestrator | Tuesday 10 March 2026 01:10:27 +0000 (0:00:00.406) 0:00:00.745 ********* 2026-03-10 01:11:44.963567 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-10 01:11:44.963571 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-10 01:11:44.963575 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-10 01:11:44.963578 | orchestrator | 2026-03-10 01:11:44.963582 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-10 01:11:44.963586 | orchestrator | 2026-03-10 01:11:44.963590 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-10 01:11:44.963613 | orchestrator | Tuesday 10 March 2026 01:10:28 +0000 (0:00:00.944) 0:00:01.689 ********* 2026-03-10 01:11:44.963618 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:11:44.963623 | orchestrator | 2026-03-10 01:11:44.963626 | orchestrator | TASK 
[service-ks-register : placement | Creating services] ********************* 2026-03-10 01:11:44.963630 | orchestrator | Tuesday 10 March 2026 01:10:29 +0000 (0:00:01.539) 0:00:03.229 ********* 2026-03-10 01:11:44.963634 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-10 01:11:44.963638 | orchestrator | 2026-03-10 01:11:44.963642 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-10 01:11:44.963645 | orchestrator | Tuesday 10 March 2026 01:10:33 +0000 (0:00:04.104) 0:00:07.334 ********* 2026-03-10 01:11:44.963649 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-10 01:11:44.963653 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-10 01:11:44.963657 | orchestrator | 2026-03-10 01:11:44.963661 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-10 01:11:44.963665 | orchestrator | Tuesday 10 March 2026 01:10:40 +0000 (0:00:06.831) 0:00:14.165 ********* 2026-03-10 01:11:44.963668 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:11:44.963672 | orchestrator | 2026-03-10 01:11:44.963676 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-10 01:11:44.963680 | orchestrator | Tuesday 10 March 2026 01:10:44 +0000 (0:00:03.789) 0:00:17.955 ********* 2026-03-10 01:11:44.963684 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:11:44.963687 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-10 01:11:44.963691 | orchestrator | 2026-03-10 01:11:44.963695 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-10 01:11:44.963699 | orchestrator | Tuesday 10 March 2026 01:10:48 +0000 (0:00:04.219) 0:00:22.175 
********* 2026-03-10 01:11:44.963703 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:11:44.963707 | orchestrator | 2026-03-10 01:11:44.963711 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-10 01:11:44.963715 | orchestrator | Tuesday 10 March 2026 01:10:52 +0000 (0:00:03.971) 0:00:26.146 ********* 2026-03-10 01:11:44.963719 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-10 01:11:44.963722 | orchestrator | 2026-03-10 01:11:44.963726 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-10 01:11:44.963734 | orchestrator | Tuesday 10 March 2026 01:10:57 +0000 (0:00:04.757) 0:00:30.903 ********* 2026-03-10 01:11:44.963738 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:44.963742 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:44.963745 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:44.963749 | orchestrator | 2026-03-10 01:11:44.963753 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-10 01:11:44.963756 | orchestrator | Tuesday 10 March 2026 01:10:58 +0000 (0:00:00.847) 0:00:31.751 ********* 2026-03-10 01:11:44.963764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.963779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.963805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.963809 | orchestrator | 2026-03-10 01:11:44.963813 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-10 01:11:44.963817 | orchestrator | Tuesday 10 March 2026 01:10:59 +0000 (0:00:01.530) 0:00:33.281 ********* 2026-03-10 01:11:44.963822 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:44.963825 | orchestrator | 2026-03-10 01:11:44.963829 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-10 01:11:44.963833 | orchestrator | Tuesday 10 March 2026 01:11:00 +0000 (0:00:00.237) 0:00:33.519 ********* 2026-03-10 01:11:44.963837 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:44.963841 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:44.963848 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:44.963852 | orchestrator | 2026-03-10 01:11:44.963855 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-10 01:11:44.963859 | orchestrator | Tuesday 10 March 2026 01:11:00 +0000 (0:00:00.542) 0:00:34.061 ********* 2026-03-10 01:11:44.963863 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:11:44.963867 | orchestrator | 2026-03-10 01:11:44.963871 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-10 01:11:44.963875 | orchestrator | Tuesday 10 March 2026 01:11:01 +0000 (0:00:01.145) 0:00:35.207 ********* 2026-03-10 01:11:44.963879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.963888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.963899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.963904 | orchestrator | 2026-03-10 01:11:44.963910 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-10 01:11:44.963916 | orchestrator | Tuesday 10 March 2026 01:11:04 +0000 (0:00:02.238) 0:00:37.445 ********* 2026-03-10 01:11:44.963922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.963933 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:44.963939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.963945 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:44.963956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.963963 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:44.963969 | orchestrator | 2026-03-10 
01:11:44.963974 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-10 01:11:44.963981 | orchestrator | Tuesday 10 March 2026 01:11:05 +0000 (0:00:01.071) 0:00:38.517 ********* 2026-03-10 01:11:44.963991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.963998 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:44.964004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.964015 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:44.964022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.964028 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:44.964035 | orchestrator | 2026-03-10 01:11:44.964041 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-10 01:11:44.964047 | orchestrator | Tuesday 10 March 2026 01:11:06 +0000 (0:00:01.207) 0:00:39.724 ********* 2026-03-10 01:11:44.964061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964089 | orchestrator | 2026-03-10 01:11:44.964095 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-10 01:11:44.964102 | orchestrator | Tuesday 10 March 2026 01:11:07 +0000 (0:00:01.620) 0:00:41.345 ********* 2026-03-10 01:11:44.964108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964132 | orchestrator | 2026-03-10 01:11:44.964141 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-10 01:11:44.964152 | orchestrator | Tuesday 10 March 2026 01:11:10 +0000 (0:00:02.694) 0:00:44.039 ********* 2026-03-10 01:11:44.964158 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-10 01:11:44.964164 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-10 01:11:44.964170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-10 01:11:44.964175 | orchestrator | 2026-03-10 01:11:44.964181 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-10 01:11:44.964187 | orchestrator | Tuesday 10 March 2026 01:11:12 +0000 (0:00:01.614) 0:00:45.654 ********* 2026-03-10 01:11:44.964193 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:44.964200 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:11:44.964206 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:11:44.964212 | orchestrator | 2026-03-10 01:11:44.964218 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-10 01:11:44.964224 | orchestrator | Tuesday 10 March 2026 01:11:13 +0000 (0:00:01.371) 0:00:47.025 ********* 2026-03-10 01:11:44.964231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.964259 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:44.964267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.964273 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:44.964284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:11:44.964295 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:44.964299 | orchestrator | 2026-03-10 01:11:44.964302 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-10 01:11:44.964306 | orchestrator | Tuesday 10 March 2026 01:11:14 +0000 (0:00:00.522) 0:00:47.548 ********* 2026-03-10 01:11:44.964315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:11:44.964328 | orchestrator | 2026-03-10 01:11:44.964331 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-10 01:11:44.964335 | orchestrator | Tuesday 10 March 2026 01:11:15 +0000 (0:00:01.299) 0:00:48.848 ********* 2026-03-10 01:11:44.964339 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:44.964343 | orchestrator | 2026-03-10 01:11:44.964346 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-10 01:11:44.964350 | orchestrator | Tuesday 10 March 2026 01:11:18 +0000 (0:00:02.850) 0:00:51.698 ********* 
2026-03-10 01:11:44.964354 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:44.964358 | orchestrator | 2026-03-10 01:11:44.964361 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-10 01:11:44.964370 | orchestrator | Tuesday 10 March 2026 01:11:20 +0000 (0:00:02.587) 0:00:54.285 ********* 2026-03-10 01:11:44.964376 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:44.964380 | orchestrator | 2026-03-10 01:11:44.964384 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-10 01:11:44.964387 | orchestrator | Tuesday 10 March 2026 01:11:36 +0000 (0:00:15.388) 0:01:09.674 ********* 2026-03-10 01:11:44.964391 | orchestrator | 2026-03-10 01:11:44.964395 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-10 01:11:44.964399 | orchestrator | Tuesday 10 March 2026 01:11:36 +0000 (0:00:00.091) 0:01:09.766 ********* 2026-03-10 01:11:44.964402 | orchestrator | 2026-03-10 01:11:44.964406 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-10 01:11:44.964410 | orchestrator | Tuesday 10 March 2026 01:11:36 +0000 (0:00:00.076) 0:01:09.843 ********* 2026-03-10 01:11:44.964414 | orchestrator | 2026-03-10 01:11:44.964417 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-10 01:11:44.964421 | orchestrator | Tuesday 10 March 2026 01:11:36 +0000 (0:00:00.082) 0:01:09.925 ********* 2026-03-10 01:11:44.964425 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:44.964428 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:11:44.964432 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:11:44.964436 | orchestrator | 2026-03-10 01:11:44.964440 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:11:44.964447 | orchestrator | 
testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:11:44.964452 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:11:44.964456 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:11:44.964460 | orchestrator | 2026-03-10 01:11:44.964463 | orchestrator | 2026-03-10 01:11:44.964467 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:11:44.964471 | orchestrator | Tuesday 10 March 2026 01:11:42 +0000 (0:00:06.318) 0:01:16.244 ********* 2026-03-10 01:11:44.964475 | orchestrator | =============================================================================== 2026-03-10 01:11:44.964478 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.39s 2026-03-10 01:11:44.964482 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.83s 2026-03-10 01:11:44.964486 | orchestrator | placement : Restart placement-api container ----------------------------- 6.32s 2026-03-10 01:11:44.964489 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.76s 2026-03-10 01:11:44.964493 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.22s 2026-03-10 01:11:44.964497 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.10s 2026-03-10 01:11:44.964501 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.97s 2026-03-10 01:11:44.964505 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.79s 2026-03-10 01:11:44.964510 | orchestrator | placement : Creating placement databases -------------------------------- 2.85s 2026-03-10 01:11:44.964517 | orchestrator | placement : Copying over 
placement.conf --------------------------------- 2.69s 2026-03-10 01:11:44.964523 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.59s 2026-03-10 01:11:44.964528 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.24s 2026-03-10 01:11:44.964534 | orchestrator | placement : Copying over config.json files for services ----------------- 1.62s 2026-03-10 01:11:44.964541 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.61s 2026-03-10 01:11:44.964547 | orchestrator | placement : include_tasks ----------------------------------------------- 1.54s 2026-03-10 01:11:44.964558 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.53s 2026-03-10 01:11:44.964564 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.37s 2026-03-10 01:11:44.964570 | orchestrator | placement : Check placement containers ---------------------------------- 1.30s 2026-03-10 01:11:44.964576 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.21s 2026-03-10 01:11:44.964581 | orchestrator | placement : include_tasks ----------------------------------------------- 1.15s 2026-03-10 01:11:48.027384 | orchestrator | 2026-03-10 01:11:48 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:11:48.030272 | orchestrator | 2026-03-10 01:11:48 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state STARTED 2026-03-10 01:11:48.033558 | orchestrator | 2026-03-10 01:11:48 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:11:48.033610 | orchestrator | 2026-03-10 01:11:48 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:11:48.033619 | orchestrator | 2026-03-10 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:51.073124 | orchestrator | 
2026-03-10 01:11:51 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:11:51.075144 | orchestrator | 2026-03-10 01:11:51 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state STARTED 2026-03-10 01:11:51.077010 | orchestrator | 2026-03-10 01:11:51 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:11:51.078747 | orchestrator | 2026-03-10 01:11:51 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state STARTED 2026-03-10 01:11:51.078797 | orchestrator | 2026-03-10 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:13:54.221449 | orchestrator | 2026-03-10 01:13:54 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:13:54.221592 | orchestrator | 2026-03-10 01:13:54 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state STARTED 2026-03-10 01:13:54.221602 | orchestrator | 2026-03-10 01:13:54 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:13:54.224052 | orchestrator | 2026-03-10 01:13:54 | INFO  | Task b61413bd-a5fb-441a-b15a-0e6412ce647e is in state SUCCESS 2026-03-10 01:13:54.225644 | orchestrator | 2026-03-10 01:13:54.225697 | orchestrator | 2026-03-10 01:13:54.225706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:13:54.225716 | orchestrator | 2026-03-10 01:13:54.225723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:13:54.225752 | orchestrator | Tuesday 10 March 2026 01:06:17 +0000 (0:00:00.308) 0:00:00.308 ********* 2026-03-10 01:13:54.225759 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:13:54.225768 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:13:54.225776 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:13:54.225783 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:13:54.225790 | orchestrator | ok: [testbed-node-4] 2026-03-10 
01:13:54.225797 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:13:54.225804 | orchestrator | 2026-03-10 01:13:54.225811 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:13:54.225818 | orchestrator | Tuesday 10 March 2026 01:06:17 +0000 (0:00:00.794) 0:00:01.102 ********* 2026-03-10 01:13:54.225825 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-10 01:13:54.225832 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-10 01:13:54.225839 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-10 01:13:54.225846 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-10 01:13:54.225853 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-10 01:13:54.225906 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-10 01:13:54.225913 | orchestrator | 2026-03-10 01:13:54.225920 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-10 01:13:54.225926 | orchestrator | 2026-03-10 01:13:54.225933 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:13:54.225940 | orchestrator | Tuesday 10 March 2026 01:06:18 +0000 (0:00:00.711) 0:00:01.814 ********* 2026-03-10 01:13:54.225949 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:13:54.225988 | orchestrator | 2026-03-10 01:13:54.225996 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-10 01:13:54.226003 | orchestrator | Tuesday 10 March 2026 01:06:19 +0000 (0:00:01.431) 0:00:03.245 ********* 2026-03-10 01:13:54.226010 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:13:54.226265 | orchestrator | ok: [testbed-node-0] 2026-03-10 
01:13:54.226276 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:13:54.226283 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:13:54.226290 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:13:54.226297 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:13:54.226304 | orchestrator | 2026-03-10 01:13:54.226312 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-10 01:13:54.226320 | orchestrator | Tuesday 10 March 2026 01:06:22 +0000 (0:00:02.478) 0:00:05.724 ********* 2026-03-10 01:13:54.226327 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:13:54.226334 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:13:54.226341 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:13:54.226358 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:13:54.226365 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:13:54.226371 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:13:54.226413 | orchestrator | 2026-03-10 01:13:54.226420 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-10 01:13:54.226427 | orchestrator | Tuesday 10 March 2026 01:06:24 +0000 (0:00:01.758) 0:00:07.483 ********* 2026-03-10 01:13:54.226434 | orchestrator | ok: [testbed-node-0] => { 2026-03-10 01:13:54.226443 | orchestrator |  "changed": false, 2026-03-10 01:13:54.226450 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:13:54.226457 | orchestrator | } 2026-03-10 01:13:54.226465 | orchestrator | ok: [testbed-node-1] => { 2026-03-10 01:13:54.226472 | orchestrator |  "changed": false, 2026-03-10 01:13:54.226479 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:13:54.226487 | orchestrator | } 2026-03-10 01:13:54.226494 | orchestrator | ok: [testbed-node-2] => { 2026-03-10 01:13:54.226502 | orchestrator |  "changed": false, 2026-03-10 01:13:54.226509 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:13:54.226516 | orchestrator | } 2026-03-10 
01:13:54.226523 | orchestrator | ok: [testbed-node-3] => { 2026-03-10 01:13:54.226531 | orchestrator |  "changed": false, 2026-03-10 01:13:54.226538 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:13:54.226545 | orchestrator | } 2026-03-10 01:13:54.226553 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 01:13:54.226560 | orchestrator |  "changed": false, 2026-03-10 01:13:54.226567 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:13:54.226575 | orchestrator | } 2026-03-10 01:13:54.226582 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 01:13:54.226589 | orchestrator |  "changed": false, 2026-03-10 01:13:54.226597 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:13:54.226604 | orchestrator | } 2026-03-10 01:13:54.226611 | orchestrator | 2026-03-10 01:13:54.226619 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-10 01:13:54.226626 | orchestrator | Tuesday 10 March 2026 01:06:25 +0000 (0:00:01.106) 0:00:08.590 ********* 2026-03-10 01:13:54.226633 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.226641 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.226648 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.226667 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.226674 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.226681 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.226689 | orchestrator | 2026-03-10 01:13:54.226696 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-10 01:13:54.226703 | orchestrator | Tuesday 10 March 2026 01:06:26 +0000 (0:00:00.703) 0:00:09.293 ********* 2026-03-10 01:13:54.226710 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-10 01:13:54.226717 | orchestrator | 2026-03-10 01:13:54.226724 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] 
********************** 2026-03-10 01:13:54.226731 | orchestrator | Tuesday 10 March 2026 01:06:29 +0000 (0:00:03.705) 0:00:12.999 ********* 2026-03-10 01:13:54.226738 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-10 01:13:54.226747 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-10 01:13:54.226754 | orchestrator | 2026-03-10 01:13:54.226778 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-10 01:13:54.226786 | orchestrator | Tuesday 10 March 2026 01:06:37 +0000 (0:00:07.331) 0:00:20.330 ********* 2026-03-10 01:13:54.226793 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:13:54.226800 | orchestrator | 2026-03-10 01:13:54.226816 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-10 01:13:54.226823 | orchestrator | Tuesday 10 March 2026 01:06:40 +0000 (0:00:03.822) 0:00:24.153 ********* 2026-03-10 01:13:54.226830 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:13:54.226837 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-10 01:13:54.226844 | orchestrator | 2026-03-10 01:13:54.226851 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-10 01:13:54.226857 | orchestrator | Tuesday 10 March 2026 01:06:44 +0000 (0:00:03.908) 0:00:28.062 ********* 2026-03-10 01:13:54.226863 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:13:54.226869 | orchestrator | 2026-03-10 01:13:54.226876 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-10 01:13:54.226882 | orchestrator | Tuesday 10 March 2026 01:06:48 +0000 (0:00:03.700) 0:00:31.762 ********* 2026-03-10 01:13:54.226889 | orchestrator | changed: [testbed-node-0] => 
(item=neutron -> service -> admin) 2026-03-10 01:13:54.226895 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-10 01:13:54.226903 | orchestrator | 2026-03-10 01:13:54.226909 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:13:54.226916 | orchestrator | Tuesday 10 March 2026 01:06:57 +0000 (0:00:08.568) 0:00:40.331 ********* 2026-03-10 01:13:54.226923 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.226930 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.226937 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.226944 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.226950 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.226957 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.226964 | orchestrator | 2026-03-10 01:13:54.226971 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-10 01:13:54.226978 | orchestrator | Tuesday 10 March 2026 01:06:57 +0000 (0:00:00.830) 0:00:41.161 ********* 2026-03-10 01:13:54.226984 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.226991 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.226998 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.227005 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.227011 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.227018 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.227024 | orchestrator | 2026-03-10 01:13:54.227031 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-10 01:13:54.227038 | orchestrator | Tuesday 10 March 2026 01:07:00 +0000 (0:00:02.741) 0:00:43.903 ********* 2026-03-10 01:13:54.227053 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:13:54.227060 | orchestrator | ok: [testbed-node-3] 2026-03-10 
01:13:54.227067 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:13:54.227074 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:13:54.227080 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:13:54.227087 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:13:54.227094 | orchestrator | 2026-03-10 01:13:54.227101 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-10 01:13:54.227107 | orchestrator | Tuesday 10 March 2026 01:07:01 +0000 (0:00:01.130) 0:00:45.033 ********* 2026-03-10 01:13:54.227114 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227121 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.227127 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.227157 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.227164 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.227170 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.227176 | orchestrator | 2026-03-10 01:13:54.227181 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-10 01:13:54.227188 | orchestrator | Tuesday 10 March 2026 01:07:04 +0000 (0:00:02.667) 0:00:47.701 ********* 2026-03-10 01:13:54.227199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.227226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.227233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.227247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.227255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.227262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.227268 | orchestrator | 2026-03-10 01:13:54.227273 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-10 01:13:54.227279 | orchestrator | Tuesday 10 March 2026 01:07:08 +0000 (0:00:03.970) 0:00:51.671 ********* 2026-03-10 01:13:54.227285 | orchestrator | [WARNING]: Skipped 2026-03-10 01:13:54.227291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-10 01:13:54.227298 | orchestrator | due to this access issue: 2026-03-10 01:13:54.227305 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-10 01:13:54.227311 | orchestrator | a directory 2026-03-10 01:13:54.227318 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:13:54.227324 | orchestrator | 2026-03-10 01:13:54.227335 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:13:54.227342 | orchestrator | Tuesday 10 March 2026 01:07:10 +0000 (0:00:01.927) 0:00:53.598 ********* 2026-03-10 01:13:54.227356 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:13:54.227364 | orchestrator | 2026-03-10 01:13:54.227371 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-10 01:13:54.227377 | orchestrator | Tuesday 10 March 2026 01:07:11 
+0000 (0:00:01.615) 0:00:55.213 ********* 2026-03-10 01:13:54.227384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.227400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.227406 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.227412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.227428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.227441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.227448 | orchestrator | 2026-03-10 01:13:54.227455 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-10 01:13:54.227462 | orchestrator | Tuesday 10 March 2026 01:07:16 +0000 (0:00:04.677) 0:00:59.891 ********* 2026-03-10 01:13:54.227468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227474 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227487 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.227502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227509 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.227521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227528 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.227536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227543 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.227550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227557 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.227564 | orchestrator | 2026-03-10 01:13:54.227571 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-10 01:13:54.227577 | orchestrator | Tuesday 10 March 2026 01:07:22 +0000 (0:00:05.408) 0:01:05.300 ********* 2026-03-10 01:13:54.227584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227591 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.227608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227628 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227641 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.227648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227655 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.227662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227669 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.227676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227692 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.227698 | orchestrator | 2026-03-10 01:13:54.227705 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-10 01:13:54.227716 | orchestrator | Tuesday 10 March 2026 01:07:25 +0000 (0:00:03.834) 0:01:09.134 ********* 2026-03-10 01:13:54.227723 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.227730 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.227737 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.227743 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227754 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.227761 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.227767 | orchestrator | 2026-03-10 01:13:54.227774 | orchestrator | TASK 
[neutron : Check if policies shall be overwritten] ************************ 2026-03-10 01:13:54.227781 | orchestrator | Tuesday 10 March 2026 01:07:30 +0000 (0:00:04.510) 0:01:13.644 ********* 2026-03-10 01:13:54.227787 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227794 | orchestrator | 2026-03-10 01:13:54.227801 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-10 01:13:54.227808 | orchestrator | Tuesday 10 March 2026 01:07:30 +0000 (0:00:00.338) 0:01:13.983 ********* 2026-03-10 01:13:54.227814 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227821 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.227828 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.227835 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.227841 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.227848 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.227854 | orchestrator | 2026-03-10 01:13:54.227861 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-10 01:13:54.227868 | orchestrator | Tuesday 10 March 2026 01:07:32 +0000 (0:00:01.907) 0:01:15.890 ********* 2026-03-10 01:13:54.227875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227882 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.227890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.227897 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.227910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.227917 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.228347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.228370 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.228378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 
01:13:54.228385 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.228393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228400 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.228407 | orchestrator | 2026-03-10 01:13:54.228414 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-10 01:13:54.228421 | orchestrator | Tuesday 10 March 2026 01:07:37 +0000 (0:00:05.220) 0:01:21.111 ********* 2026-03-10 01:13:54.228428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.228462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228470 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.228490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.228497 | orchestrator | 2026-03-10 01:13:54.228504 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-10 01:13:54.228511 | orchestrator | Tuesday 10 March 2026 01:07:44 +0000 (0:00:06.582) 0:01:27.693 ********* 2026-03-10 01:13:54.228526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.228548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.228560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.228582 | orchestrator 
| 2026-03-10 01:13:54.228588 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-10 01:13:54.228620 | orchestrator | Tuesday 10 March 2026 01:07:53 +0000 (0:00:09.102) 0:01:36.796 ********* 2026-03-10 01:13:54.228628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.228635 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.228643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.228655 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.228663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.228670 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.228677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228685 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.228701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228717 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.228724 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.228731 | orchestrator | 2026-03-10 01:13:54.228739 | orchestrator | TASK [neutron : 
Copying over ssh key] ****************************************** 2026-03-10 01:13:54.228753 | orchestrator | Tuesday 10 March 2026 01:07:57 +0000 (0:00:03.616) 0:01:40.412 ********* 2026-03-10 01:13:54.228760 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:13:54.228768 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.228775 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.228781 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.228788 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:13:54.228795 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:13:54.228801 | orchestrator | 2026-03-10 01:13:54.228809 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-10 01:13:54.228816 | orchestrator | Tuesday 10 March 2026 01:08:02 +0000 (0:00:04.880) 0:01:45.293 ********* 2026-03-10 01:13:54.228824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228831 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.228839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228846 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.228864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.228872 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.228880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.228912 | orchestrator | 2026-03-10 01:13:54.228919 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-10 01:13:54.228927 | orchestrator | Tuesday 10 March 2026 01:08:07 +0000 (0:00:05.156) 0:01:50.450 ********* 2026-03-10 01:13:54.228935 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.228943 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.228950 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.228958 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.228966 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.228974 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.228982 | orchestrator | 2026-03-10 01:13:54.228990 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-10 01:13:54.228998 | orchestrator | Tuesday 10 March 2026 01:08:11 +0000 (0:00:04.189) 0:01:54.640 ********* 2026-03-10 01:13:54.229006 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229014 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229022 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229030 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229037 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229045 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229053 | orchestrator | 2026-03-10 01:13:54.229061 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-10 
01:13:54.229069 | orchestrator | Tuesday 10 March 2026 01:08:16 +0000 (0:00:04.709) 0:01:59.349 ********* 2026-03-10 01:13:54.229080 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229088 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229096 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229105 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229113 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229121 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229130 | orchestrator | 2026-03-10 01:13:54.229197 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-10 01:13:54.229212 | orchestrator | Tuesday 10 March 2026 01:08:19 +0000 (0:00:03.007) 0:02:02.356 ********* 2026-03-10 01:13:54.229220 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229227 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229232 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229238 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229244 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229250 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229257 | orchestrator | 2026-03-10 01:13:54.229264 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-10 01:13:54.229270 | orchestrator | Tuesday 10 March 2026 01:08:22 +0000 (0:00:03.495) 0:02:05.852 ********* 2026-03-10 01:13:54.229276 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229282 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229288 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229294 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229301 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229308 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229315 | orchestrator | 
2026-03-10 01:13:54.229322 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-10 01:13:54.229328 | orchestrator | Tuesday 10 March 2026 01:08:25 +0000 (0:00:03.299) 0:02:09.152 ********* 2026-03-10 01:13:54.229335 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229342 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229349 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229356 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229362 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229369 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229376 | orchestrator | 2026-03-10 01:13:54.229383 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-10 01:13:54.229390 | orchestrator | Tuesday 10 March 2026 01:08:30 +0000 (0:00:04.409) 0:02:13.561 ********* 2026-03-10 01:13:54.229396 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:13:54.229403 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229410 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:13:54.229417 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229424 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:13:54.229431 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229437 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:13:54.229444 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229452 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:13:54.229458 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229465 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:13:54.229472 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229479 | orchestrator | 2026-03-10 01:13:54.229486 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-10 01:13:54.229492 | orchestrator | Tuesday 10 March 2026 01:08:34 +0000 (0:00:03.849) 0:02:17.410 ********* 2026-03-10 01:13:54.229500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.229512 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.229537 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.229551 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.229565 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.229585 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.229599 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229605 | orchestrator | 2026-03-10 01:13:54.229612 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-10 01:13:54.229619 | orchestrator | Tuesday 10 March 2026 01:08:37 +0000 (0:00:03.451) 0:02:20.861 ********* 2026-03-10 01:13:54.229635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.229643 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.229657 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.229675 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.229689 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.229708 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 
01:13:54.229727 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229734 | orchestrator | 2026-03-10 01:13:54.229741 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-10 01:13:54.229747 | orchestrator | Tuesday 10 March 2026 01:08:40 +0000 (0:00:02.861) 0:02:23.722 ********* 2026-03-10 01:13:54.229754 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229760 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229766 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229773 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229779 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229786 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229793 | orchestrator | 2026-03-10 01:13:54.229800 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-10 01:13:54.229807 | orchestrator | Tuesday 10 March 2026 01:08:44 +0000 (0:00:03.709) 0:02:27.432 ********* 2026-03-10 01:13:54.229813 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229820 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229827 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229834 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:13:54.229840 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:13:54.229847 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:13:54.229854 | orchestrator | 2026-03-10 01:13:54.229861 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-10 01:13:54.229873 | orchestrator | Tuesday 10 March 2026 01:08:51 +0000 (0:00:07.101) 0:02:34.533 ********* 2026-03-10 01:13:54.229880 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229886 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229893 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:13:54.229900 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229907 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229913 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229920 | orchestrator | 2026-03-10 01:13:54.229927 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-10 01:13:54.229934 | orchestrator | Tuesday 10 March 2026 01:08:56 +0000 (0:00:05.313) 0:02:39.847 ********* 2026-03-10 01:13:54.229941 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.229948 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.229954 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.229961 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.229967 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.229974 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.229981 | orchestrator | 2026-03-10 01:13:54.229987 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-10 01:13:54.229994 | orchestrator | Tuesday 10 March 2026 01:09:00 +0000 (0:00:04.393) 0:02:44.241 ********* 2026-03-10 01:13:54.230001 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230007 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230069 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230078 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230086 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230093 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230100 | orchestrator | 2026-03-10 01:13:54.230108 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-10 01:13:54.230115 | orchestrator | Tuesday 10 March 2026 01:09:03 +0000 (0:00:02.359) 0:02:46.600 ********* 2026-03-10 01:13:54.230123 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:13:54.230129 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230154 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230160 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230167 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230174 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230181 | orchestrator | 2026-03-10 01:13:54.230188 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-10 01:13:54.230194 | orchestrator | Tuesday 10 March 2026 01:09:06 +0000 (0:00:02.854) 0:02:49.454 ********* 2026-03-10 01:13:54.230201 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230208 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230215 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230222 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230227 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230234 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230240 | orchestrator | 2026-03-10 01:13:54.230246 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-10 01:13:54.230255 | orchestrator | Tuesday 10 March 2026 01:09:08 +0000 (0:00:02.494) 0:02:51.949 ********* 2026-03-10 01:13:54.230263 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230270 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230277 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230284 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230291 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230299 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230306 | orchestrator | 2026-03-10 01:13:54.230313 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-10 01:13:54.230328 | orchestrator | Tuesday 10 March 
2026 01:09:13 +0000 (0:00:04.841) 0:02:56.790 ********* 2026-03-10 01:13:54.230335 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230349 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230357 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230369 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230377 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230384 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230392 | orchestrator | 2026-03-10 01:13:54.230399 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-10 01:13:54.230406 | orchestrator | Tuesday 10 March 2026 01:09:17 +0000 (0:00:04.205) 0:03:00.996 ********* 2026-03-10 01:13:54.230414 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:13:54.230423 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230430 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:13:54.230437 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230445 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:13:54.230452 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230459 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:13:54.230467 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230473 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:13:54.230480 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230488 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:13:54.230494 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230501 | orchestrator | 2026-03-10 01:13:54.230508 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-10 01:13:54.230515 | orchestrator | Tuesday 10 March 2026 01:09:21 +0000 (0:00:03.311) 0:03:04.307 ********* 2026-03-10 01:13:54.230523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.230531 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:13:54.230538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.230546 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:13:54.230578 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.230592 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.230607 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:13:54.230622 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
01:13:54.230630 | orchestrator | 2026-03-10 01:13:54.230637 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-10 01:13:54.230644 | orchestrator | Tuesday 10 March 2026 01:09:24 +0000 (0:00:03.305) 0:03:07.612 ********* 2026-03-10 01:13:54.230651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.230677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.230686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:13:54.230693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.230701 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.230714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:13:54.230721 | orchestrator | 2026-03-10 01:13:54.230728 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:13:54.230735 | orchestrator | Tuesday 10 March 2026 01:09:30 +0000 (0:00:06.073) 0:03:13.686 ********* 2026-03-10 01:13:54.230743 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:13:54.230750 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:13:54.230757 | orchestrator | skipping: [testbed-node-2] 
2026-03-10 01:13:54.230765 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:13:54.230772 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:13:54.230784 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:13:54.230791 | orchestrator | 2026-03-10 01:13:54.230799 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-10 01:13:54.230806 | orchestrator | Tuesday 10 March 2026 01:09:31 +0000 (0:00:00.745) 0:03:14.431 ********* 2026-03-10 01:13:54.230817 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:13:54.230824 | orchestrator | 2026-03-10 01:13:54.230831 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-10 01:13:54.230839 | orchestrator | Tuesday 10 March 2026 01:09:33 +0000 (0:00:02.252) 0:03:16.684 ********* 2026-03-10 01:13:54.230846 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:13:54.230853 | orchestrator | 2026-03-10 01:13:54.230860 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-10 01:13:54.230867 | orchestrator | Tuesday 10 March 2026 01:09:35 +0000 (0:00:02.473) 0:03:19.158 ********* 2026-03-10 01:13:54.230875 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:13:54.230882 | orchestrator | 2026-03-10 01:13:54.230890 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:13:54.230897 | orchestrator | Tuesday 10 March 2026 01:10:23 +0000 (0:00:47.741) 0:04:06.899 ********* 2026-03-10 01:13:54.230904 | orchestrator | 2026-03-10 01:13:54.230911 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:13:54.230918 | orchestrator | Tuesday 10 March 2026 01:10:23 +0000 (0:00:00.106) 0:04:07.006 ********* 2026-03-10 01:13:54.230925 | orchestrator | 2026-03-10 01:13:54.230932 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2026-03-10 01:13:54.230939 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:00.310) 0:04:07.316 ********* 2026-03-10 01:13:54.230946 | orchestrator | 2026-03-10 01:13:54.230954 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:13:54.230961 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:00.073) 0:04:07.389 ********* 2026-03-10 01:13:54.230968 | orchestrator | 2026-03-10 01:13:54.230975 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:13:54.230982 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:00.076) 0:04:07.466 ********* 2026-03-10 01:13:54.230989 | orchestrator | 2026-03-10 01:13:54.230996 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:13:54.231003 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:00.077) 0:04:07.544 ********* 2026-03-10 01:13:54.231011 | orchestrator | 2026-03-10 01:13:54.231018 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-10 01:13:54.231026 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:00.070) 0:04:07.615 ********* 2026-03-10 01:13:54.231038 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:13:54.231046 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:13:54.231053 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:13:54.231060 | orchestrator | 2026-03-10 01:13:54.231067 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-10 01:13:54.231074 | orchestrator | Tuesday 10 March 2026 01:10:52 +0000 (0:00:28.482) 0:04:36.098 ********* 2026-03-10 01:13:54.231081 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:13:54.231088 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:13:54.231096 | orchestrator | changed: 
[testbed-node-4] 2026-03-10 01:13:54.231103 | orchestrator | 2026-03-10 01:13:54.231110 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:13:54.231119 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:13:54.231128 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-10 01:13:54.231157 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-10 01:13:54.231165 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:13:54.231172 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:13:54.231179 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:13:54.231186 | orchestrator | 2026-03-10 01:13:54.231193 | orchestrator | 2026-03-10 01:13:54.231200 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:13:54.231207 | orchestrator | Tuesday 10 March 2026 01:11:55 +0000 (0:01:02.213) 0:05:38.312 ********* 2026-03-10 01:13:54.231213 | orchestrator | =============================================================================== 2026-03-10 01:13:54.231220 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.21s 2026-03-10 01:13:54.231226 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.74s 2026-03-10 01:13:54.231231 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.48s 2026-03-10 01:13:54.231237 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.10s 2026-03-10 01:13:54.231244 | orchestrator | 
service-ks-register : neutron | Granting user roles --------------------- 8.57s 2026-03-10 01:13:54.231251 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.33s 2026-03-10 01:13:54.231257 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.10s 2026-03-10 01:13:54.231263 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.58s 2026-03-10 01:13:54.231273 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.07s 2026-03-10 01:13:54.231279 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.41s 2026-03-10 01:13:54.231290 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.31s 2026-03-10 01:13:54.231296 | orchestrator | neutron : Copying over existing policy file ----------------------------- 5.22s 2026-03-10 01:13:54.231301 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.16s 2026-03-10 01:13:54.231306 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.88s 2026-03-10 01:13:54.231312 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.84s 2026-03-10 01:13:54.231317 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 4.71s 2026-03-10 01:13:54.231329 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.68s 2026-03-10 01:13:54.231336 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.51s 2026-03-10 01:13:54.231343 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 4.41s 2026-03-10 01:13:54.231350 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.39s 2026-03-10 01:13:54.231357 | orchestrator | 
2026-03-10 01:13:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:13:57.268375 | orchestrator | 2026-03-10 01:13:57 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:13:57.270880 | orchestrator | 2026-03-10 01:13:57 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state STARTED 2026-03-10 01:13:57.272425 | orchestrator | 2026-03-10 01:13:57 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:13:57.273818 | orchestrator | 2026-03-10 01:13:57 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:13:57.273856 | orchestrator | 2026-03-10 01:13:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:00.324326 | orchestrator | 2026-03-10 01:14:00 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:00.326270 | orchestrator | 2026-03-10 01:14:00 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state STARTED 2026-03-10 01:14:00.327819 | orchestrator | 2026-03-10 01:14:00 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:00.331590 | orchestrator | 2026-03-10 01:14:00 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:00.331647 | orchestrator | 2026-03-10 01:14:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:03.368769 | orchestrator | 2026-03-10 01:14:03 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:03.370249 | orchestrator | 2026-03-10 01:14:03 | INFO  | Task efb2f095-8cf4-4dfd-853a-2084eae28fae is in state SUCCESS 2026-03-10 01:14:03.372326 | orchestrator | 2026-03-10 01:14:03.372376 | orchestrator | 2026-03-10 01:14:03.372388 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:03.372399 | orchestrator | 2026-03-10 01:14:03.372406 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-10 01:14:03.372415 | orchestrator | Tuesday 10 March 2026 01:11:48 +0000 (0:00:00.305) 0:00:00.305 ********* 2026-03-10 01:14:03.372425 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:03.372435 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:03.372444 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:03.372451 | orchestrator | 2026-03-10 01:14:03.372459 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:03.372468 | orchestrator | Tuesday 10 March 2026 01:11:49 +0000 (0:00:00.356) 0:00:00.661 ********* 2026-03-10 01:14:03.372477 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-10 01:14:03.372486 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-10 01:14:03.372494 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-10 01:14:03.372502 | orchestrator | 2026-03-10 01:14:03.372511 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-10 01:14:03.372519 | orchestrator | 2026-03-10 01:14:03.372528 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-10 01:14:03.372537 | orchestrator | Tuesday 10 March 2026 01:11:49 +0000 (0:00:00.486) 0:00:01.147 ********* 2026-03-10 01:14:03.372546 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:03.372556 | orchestrator | 2026-03-10 01:14:03.372565 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-10 01:14:03.372602 | orchestrator | Tuesday 10 March 2026 01:11:50 +0000 (0:00:00.625) 0:00:01.773 ********* 2026-03-10 01:14:03.372612 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-10 01:14:03.372621 | orchestrator | 2026-03-10 01:14:03.372630 | orchestrator | 
TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-10 01:14:03.372639 | orchestrator | Tuesday 10 March 2026 01:11:54 +0000 (0:00:04.518) 0:00:06.291 ********* 2026-03-10 01:14:03.372646 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-10 01:14:03.372654 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-10 01:14:03.372662 | orchestrator | 2026-03-10 01:14:03.372670 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-10 01:14:03.372678 | orchestrator | Tuesday 10 March 2026 01:12:02 +0000 (0:00:08.247) 0:00:14.539 ********* 2026-03-10 01:14:03.372701 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:14:03.372708 | orchestrator | 2026-03-10 01:14:03.372715 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-10 01:14:03.372722 | orchestrator | Tuesday 10 March 2026 01:12:06 +0000 (0:00:03.829) 0:00:18.369 ********* 2026-03-10 01:14:03.372730 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:14:03.372738 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-10 01:14:03.372747 | orchestrator | 2026-03-10 01:14:03.372755 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-10 01:14:03.372762 | orchestrator | Tuesday 10 March 2026 01:12:11 +0000 (0:00:04.267) 0:00:22.637 ********* 2026-03-10 01:14:03.372772 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:14:03.372781 | orchestrator | 2026-03-10 01:14:03.372789 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-10 01:14:03.372797 | orchestrator | Tuesday 10 March 2026 01:12:14 +0000 (0:00:03.853) 0:00:26.490 ********* 2026-03-10 
01:14:03.372805 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-10 01:14:03.372814 | orchestrator | 2026-03-10 01:14:03.372823 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-10 01:14:03.372831 | orchestrator | Tuesday 10 March 2026 01:12:19 +0000 (0:00:04.493) 0:00:30.983 ********* 2026-03-10 01:14:03.372839 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.372848 | orchestrator | 2026-03-10 01:14:03.372857 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-10 01:14:03.372864 | orchestrator | Tuesday 10 March 2026 01:12:23 +0000 (0:00:03.764) 0:00:34.748 ********* 2026-03-10 01:14:03.372873 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.372881 | orchestrator | 2026-03-10 01:14:03.372890 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-10 01:14:03.372898 | orchestrator | Tuesday 10 March 2026 01:12:27 +0000 (0:00:04.457) 0:00:39.205 ********* 2026-03-10 01:14:03.372906 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.372914 | orchestrator | 2026-03-10 01:14:03.372923 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-10 01:14:03.372931 | orchestrator | Tuesday 10 March 2026 01:12:31 +0000 (0:00:03.801) 0:00:43.007 ********* 2026-03-10 01:14:03.372961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.372987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373053 | orchestrator | 2026-03-10 01:14:03.373062 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-10 01:14:03.373070 | orchestrator | Tuesday 10 March 2026 01:12:33 +0000 (0:00:01.580) 0:00:44.588 ********* 2026-03-10 01:14:03.373078 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:03.373086 | orchestrator | 2026-03-10 01:14:03.373096 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-10 01:14:03.373104 | orchestrator | Tuesday 10 March 2026 01:12:33 +0000 (0:00:00.152) 0:00:44.741 ********* 2026-03-10 01:14:03.373112 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:03.373120 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:03.373160 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:03.373170 | orchestrator | 2026-03-10 01:14:03.373178 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-10 01:14:03.373187 | orchestrator | Tuesday 10 March 2026 01:12:33 +0000 (0:00:00.561) 0:00:45.302 ********* 2026-03-10 01:14:03.373197 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:14:03.373205 | orchestrator | 2026-03-10 01:14:03.373213 | orchestrator | TASK [magnum : 
Copying over kubeconfig file] *********************************** 2026-03-10 01:14:03.373221 | orchestrator | Tuesday 10 March 2026 01:12:34 +0000 (0:00:01.016) 0:00:46.318 ********* 2026-03-10 01:14:03.373234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373307 | orchestrator | 2026-03-10 01:14:03.373315 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-10 01:14:03.373323 | orchestrator | Tuesday 10 March 2026 01:12:37 +0000 (0:00:03.010) 0:00:49.329 ********* 2026-03-10 01:14:03.373332 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:03.373340 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:03.373348 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:03.373357 | orchestrator | 2026-03-10 01:14:03.373365 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-03-10 01:14:03.373374 | orchestrator | Tuesday 10 March 2026 01:12:38 +0000 (0:00:00.542) 0:00:49.872 ********* 2026-03-10 01:14:03.373383 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:03.373391 | orchestrator | 2026-03-10 01:14:03.373400 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-10 01:14:03.373408 | orchestrator | Tuesday 10 March 2026 01:12:39 +0000 (0:00:01.035) 0:00:50.908 ********* 2026-03-10 01:14:03.373417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.373494 | orchestrator | 2026-03-10 01:14:03.373502 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS 
certificate] *** 2026-03-10 01:14:03.373510 | orchestrator | Tuesday 10 March 2026 01:12:42 +0000 (0:00:03.009) 0:00:53.917 ********* 2026-03-10 01:14:03.373522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.373531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.373540 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:03.373548 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.373554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.373563 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:03.373568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.373577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.373585 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:03.373593 | orchestrator | 2026-03-10 01:14:03.373601 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-10 01:14:03.373608 | orchestrator | Tuesday 10 March 2026 01:12:43 +0000 (0:00:00.849) 0:00:54.766 ********* 2026-03-10 01:14:03.373616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.373629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.373652 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:03.373661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.373674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.373683 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:03.373692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.373701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.373710 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:03.373719 | orchestrator | 2026-03-10 01:14:03.373732 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-10 01:14:03.373748 | orchestrator | Tuesday 10 March 2026 01:12:44 +0000 (0:00:01.630) 0:00:56.397 ********* 2026-03-10 01:14:03.373758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.373767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374154 | orchestrator | 2026-03-10 01:14:03.374163 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-10 01:14:03.374171 | orchestrator | Tuesday 10 March 2026 01:12:47 +0000 (0:00:02.721) 0:00:59.118 ********* 2026-03-10 01:14:03.374180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374254 | orchestrator | 2026-03-10 01:14:03.374263 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-10 01:14:03.374276 | orchestrator | Tuesday 10 March 2026 01:12:53 +0000 (0:00:05.739) 0:01:04.858 ********* 2026-03-10 01:14:03.374282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.374287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.374297 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:03.374306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.374311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.374316 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:03.374327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:03.374335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:03.374343 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:03.374351 | orchestrator | 2026-03-10 01:14:03.374365 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-10 01:14:03.374374 | orchestrator | Tuesday 10 March 2026 01:12:54 +0000 (0:00:00.774) 0:01:05.632 ********* 2026-03-10 01:14:03.374386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:03.374417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:03.374455 | orchestrator | 2026-03-10 01:14:03.374463 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-10 01:14:03.374471 | orchestrator | Tuesday 10 March 2026 01:12:56 +0000 (0:00:02.738) 0:01:08.371 ********* 2026-03-10 01:14:03.374479 | orchestrator | skipping: 
[testbed-node-0] 2026-03-10 01:14:03.374487 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:03.374495 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:03.374503 | orchestrator | 2026-03-10 01:14:03.374511 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-10 01:14:03.374519 | orchestrator | Tuesday 10 March 2026 01:12:57 +0000 (0:00:00.341) 0:01:08.713 ********* 2026-03-10 01:14:03.374527 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.374536 | orchestrator | 2026-03-10 01:14:03.374545 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-10 01:14:03.374554 | orchestrator | Tuesday 10 March 2026 01:12:59 +0000 (0:00:02.628) 0:01:11.341 ********* 2026-03-10 01:14:03.374562 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.374571 | orchestrator | 2026-03-10 01:14:03.374578 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-10 01:14:03.374586 | orchestrator | Tuesday 10 March 2026 01:13:03 +0000 (0:00:03.413) 0:01:14.755 ********* 2026-03-10 01:14:03.374595 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.374604 | orchestrator | 2026-03-10 01:14:03.374612 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-10 01:14:03.374619 | orchestrator | Tuesday 10 March 2026 01:13:24 +0000 (0:00:21.356) 0:01:36.111 ********* 2026-03-10 01:14:03.374627 | orchestrator | 2026-03-10 01:14:03.374636 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-10 01:14:03.374644 | orchestrator | Tuesday 10 March 2026 01:13:24 +0000 (0:00:00.149) 0:01:36.261 ********* 2026-03-10 01:14:03.374652 | orchestrator | 2026-03-10 01:14:03.374660 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-10 01:14:03.374669 | 
orchestrator | Tuesday 10 March 2026 01:13:24 +0000 (0:00:00.143) 0:01:36.405 ********* 2026-03-10 01:14:03.374678 | orchestrator | 2026-03-10 01:14:03.374687 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-10 01:14:03.374696 | orchestrator | Tuesday 10 March 2026 01:13:25 +0000 (0:00:00.220) 0:01:36.626 ********* 2026-03-10 01:14:03.374705 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.374715 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:03.374724 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:03.374733 | orchestrator | 2026-03-10 01:14:03.374741 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-10 01:14:03.374750 | orchestrator | Tuesday 10 March 2026 01:13:45 +0000 (0:00:20.385) 0:01:57.012 ********* 2026-03-10 01:14:03.374766 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:03.374783 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:03.374791 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:03.374798 | orchestrator | 2026-03-10 01:14:03.374811 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:14:03.374820 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:14:03.374829 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:14:03.374837 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:14:03.374845 | orchestrator | 2026-03-10 01:14:03.374852 | orchestrator | 2026-03-10 01:14:03.374860 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:14:03.374867 | orchestrator | Tuesday 10 March 2026 01:14:01 +0000 (0:00:15.683) 0:02:12.696 ********* 2026-03-10 
01:14:03.374875 | orchestrator | =============================================================================== 2026-03-10 01:14:03.374883 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 21.36s 2026-03-10 01:14:03.374891 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.39s 2026-03-10 01:14:03.374898 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.68s 2026-03-10 01:14:03.374905 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 8.25s 2026-03-10 01:14:03.374912 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.74s 2026-03-10 01:14:03.374919 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.52s 2026-03-10 01:14:03.374926 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.49s 2026-03-10 01:14:03.374932 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.46s 2026-03-10 01:14:03.374940 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.27s 2026-03-10 01:14:03.374947 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.85s 2026-03-10 01:14:03.374955 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.83s 2026-03-10 01:14:03.374963 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s 2026-03-10 01:14:03.374971 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.76s 2026-03-10 01:14:03.374984 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 3.41s 2026-03-10 01:14:03.374992 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.01s 2026-03-10 01:14:03.375001 
| orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.01s 2026-03-10 01:14:03.375009 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.74s 2026-03-10 01:14:03.375017 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.72s 2026-03-10 01:14:03.375025 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.63s 2026-03-10 01:14:03.375034 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.63s 2026-03-10 01:14:03.375042 | orchestrator | 2026-03-10 01:14:03 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:03.375051 | orchestrator | 2026-03-10 01:14:03 | INFO  | Task 81530758-9fd1-476b-806d-0b4561c6261e is in state STARTED 2026-03-10 01:14:03.375247 | orchestrator | 2026-03-10 01:14:03 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:03.375264 | orchestrator | 2026-03-10 01:14:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:06.426367 | orchestrator | 2026-03-10 01:14:06 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:06.428263 | orchestrator | 2026-03-10 01:14:06 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:06.430352 | orchestrator | 2026-03-10 01:14:06 | INFO  | Task 81530758-9fd1-476b-806d-0b4561c6261e is in state STARTED 2026-03-10 01:14:06.432232 | orchestrator | 2026-03-10 01:14:06 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:06.432300 | orchestrator | 2026-03-10 01:14:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:09.487268 | orchestrator | 2026-03-10 01:14:09 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:09.489370 | orchestrator | 2026-03-10 01:14:09 | INFO  | Task 
de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:09.490467 | orchestrator | 2026-03-10 01:14:09 | INFO  | Task 81530758-9fd1-476b-806d-0b4561c6261e is in state SUCCESS 2026-03-10 01:14:09.492339 | orchestrator | 2026-03-10 01:14:09 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:09.492585 | orchestrator | 2026-03-10 01:14:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:12.545762 | orchestrator | 2026-03-10 01:14:12 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:12.549280 | orchestrator | 2026-03-10 01:14:12 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:12.552738 | orchestrator | 2026-03-10 01:14:12 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:14:12.555784 | orchestrator | 2026-03-10 01:14:12 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:12.555887 | orchestrator | 2026-03-10 01:14:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:15.608016 | orchestrator | 2026-03-10 01:14:15 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:15.610438 | orchestrator | 2026-03-10 01:14:15 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:15.613994 | orchestrator | 2026-03-10 01:14:15 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:14:15.615360 | orchestrator | 2026-03-10 01:14:15 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:15.616035 | orchestrator | 2026-03-10 01:14:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:18.664792 | orchestrator | 2026-03-10 01:14:18 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state STARTED 2026-03-10 01:14:18.665695 | orchestrator | 2026-03-10 01:14:18 | INFO  | Task 
de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:18.666614 | orchestrator | 2026-03-10 01:14:18 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:14:18.667998 | orchestrator | 2026-03-10 01:14:18 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:18.668029 | orchestrator | 2026-03-10 01:14:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:21.720157 | orchestrator | 2026-03-10 01:14:21 | INFO  | Task f6e580e3-745d-4d9c-b798-0d7a66ec5a8e is in state SUCCESS 2026-03-10 01:14:21.722980 | orchestrator | 2026-03-10 01:14:21.723012 | orchestrator | 2026-03-10 01:14:21.723020 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:21.723028 | orchestrator | 2026-03-10 01:14:21.723054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:14:21.723062 | orchestrator | Tuesday 10 March 2026 01:14:06 +0000 (0:00:00.201) 0:00:00.201 ********* 2026-03-10 01:14:21.723094 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.723103 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:21.723130 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:21.723136 | orchestrator | 2026-03-10 01:14:21.723142 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:21.723149 | orchestrator | Tuesday 10 March 2026 01:14:06 +0000 (0:00:00.325) 0:00:00.527 ********* 2026-03-10 01:14:21.723155 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-03-10 01:14:21.723162 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-03-10 01:14:21.723168 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-03-10 01:14:21.723173 | orchestrator | 2026-03-10 01:14:21.723179 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2026-03-10 01:14:21.723185 | orchestrator | 2026-03-10 01:14:21.723191 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-03-10 01:14:21.723197 | orchestrator | Tuesday 10 March 2026 01:14:07 +0000 (0:00:00.702) 0:00:01.230 ********* 2026-03-10 01:14:21.723202 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:21.723208 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.723214 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:21.723220 | orchestrator | 2026-03-10 01:14:21.723226 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:14:21.723233 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:14:21.723242 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:14:21.723248 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:14:21.723254 | orchestrator | 2026-03-10 01:14:21.723259 | orchestrator | 2026-03-10 01:14:21.723265 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:14:21.723271 | orchestrator | Tuesday 10 March 2026 01:14:08 +0000 (0:00:00.708) 0:00:01.939 ********* 2026-03-10 01:14:21.723277 | orchestrator | =============================================================================== 2026-03-10 01:14:21.723283 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.71s 2026-03-10 01:14:21.723289 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-03-10 01:14:21.723295 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-10 01:14:21.723301 | orchestrator | 2026-03-10 01:14:21.723307 | orchestrator 
| 2026-03-10 01:14:21.723312 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:21.723318 | orchestrator | 2026-03-10 01:14:21.723324 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-10 01:14:21.723330 | orchestrator | Tuesday 10 March 2026 01:03:26 +0000 (0:00:00.364) 0:00:00.364 ********* 2026-03-10 01:14:21.723335 | orchestrator | changed: [testbed-manager] 2026-03-10 01:14:21.723343 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723349 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:21.723355 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:21.723361 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:14:21.723367 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:14:21.723372 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:14:21.723378 | orchestrator | 2026-03-10 01:14:21.723384 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:14:21.723390 | orchestrator | Tuesday 10 March 2026 01:03:27 +0000 (0:00:00.981) 0:00:01.345 ********* 2026-03-10 01:14:21.723396 | orchestrator | changed: [testbed-manager] 2026-03-10 01:14:21.723401 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723407 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:21.723413 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:21.723425 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:14:21.723431 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:14:21.723436 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:14:21.723442 | orchestrator | 2026-03-10 01:14:21.723448 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:21.723454 | orchestrator | Tuesday 10 March 2026 01:03:28 +0000 (0:00:00.682) 0:00:02.028 ********* 2026-03-10 01:14:21.723459 | 
orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-10 01:14:21.723466 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-10 01:14:21.723472 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-10 01:14:21.723477 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-10 01:14:21.723483 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-10 01:14:21.723489 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-10 01:14:21.723495 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-10 01:14:21.723500 | orchestrator | 2026-03-10 01:14:21.723507 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-10 01:14:21.723512 | orchestrator | 2026-03-10 01:14:21.723518 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-10 01:14:21.723524 | orchestrator | Tuesday 10 March 2026 01:03:28 +0000 (0:00:00.985) 0:00:03.013 ********* 2026-03-10 01:14:21.723530 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:21.723536 | orchestrator | 2026-03-10 01:14:21.723542 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-10 01:14:21.723547 | orchestrator | Tuesday 10 March 2026 01:03:30 +0000 (0:00:01.076) 0:00:04.090 ********* 2026-03-10 01:14:21.723553 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-10 01:14:21.723570 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-10 01:14:21.723577 | orchestrator | 2026-03-10 01:14:21.723588 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-10 01:14:21.723595 | orchestrator | Tuesday 10 March 2026 01:03:34 +0000 (0:00:04.426) 0:00:08.516 ********* 2026-03-10 01:14:21.723602 | 
orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:14:21.723608 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:14:21.723614 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723620 | orchestrator | 2026-03-10 01:14:21.723627 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-10 01:14:21.723633 | orchestrator | Tuesday 10 March 2026 01:03:39 +0000 (0:00:04.783) 0:00:13.300 ********* 2026-03-10 01:14:21.723639 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723646 | orchestrator | 2026-03-10 01:14:21.723652 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-10 01:14:21.723659 | orchestrator | Tuesday 10 March 2026 01:03:40 +0000 (0:00:00.862) 0:00:14.162 ********* 2026-03-10 01:14:21.723665 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723671 | orchestrator | 2026-03-10 01:14:21.723678 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-10 01:14:21.723684 | orchestrator | Tuesday 10 March 2026 01:03:41 +0000 (0:00:01.806) 0:00:15.968 ********* 2026-03-10 01:14:21.723690 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723696 | orchestrator | 2026-03-10 01:14:21.723703 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:14:21.723709 | orchestrator | Tuesday 10 March 2026 01:03:45 +0000 (0:00:03.521) 0:00:19.490 ********* 2026-03-10 01:14:21.723715 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.723721 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.723728 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.723734 | orchestrator | 2026-03-10 01:14:21.723740 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-10 01:14:21.723746 | orchestrator | Tuesday 
10 March 2026 01:03:46 +0000 (0:00:00.676) 0:00:20.167 ********* 2026-03-10 01:14:21.723758 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.723764 | orchestrator | 2026-03-10 01:14:21.723771 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-10 01:14:21.723777 | orchestrator | Tuesday 10 March 2026 01:04:20 +0000 (0:00:34.838) 0:00:55.005 ********* 2026-03-10 01:14:21.723783 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.723790 | orchestrator | 2026-03-10 01:14:21.723796 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-10 01:14:21.723802 | orchestrator | Tuesday 10 March 2026 01:04:37 +0000 (0:00:16.823) 0:01:11.829 ********* 2026-03-10 01:14:21.723808 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.723815 | orchestrator | 2026-03-10 01:14:21.723821 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-10 01:14:21.723827 | orchestrator | Tuesday 10 March 2026 01:04:53 +0000 (0:00:15.443) 0:01:27.273 ********* 2026-03-10 01:14:21.723833 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.723840 | orchestrator | 2026-03-10 01:14:21.723846 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-10 01:14:21.723852 | orchestrator | Tuesday 10 March 2026 01:04:56 +0000 (0:00:02.754) 0:01:30.028 ********* 2026-03-10 01:14:21.723858 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.723865 | orchestrator | 2026-03-10 01:14:21.723871 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:14:21.723877 | orchestrator | Tuesday 10 March 2026 01:04:56 +0000 (0:00:00.841) 0:01:30.869 ********* 2026-03-10 01:14:21.723884 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 
01:14:21.723890 | orchestrator | 2026-03-10 01:14:21.723896 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-10 01:14:21.723903 | orchestrator | Tuesday 10 March 2026 01:04:57 +0000 (0:00:00.780) 0:01:31.650 ********* 2026-03-10 01:14:21.723909 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.723915 | orchestrator | 2026-03-10 01:14:21.723921 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-10 01:14:21.723928 | orchestrator | Tuesday 10 March 2026 01:05:18 +0000 (0:00:20.722) 0:01:52.373 ********* 2026-03-10 01:14:21.723934 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.723941 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.723946 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.723952 | orchestrator | 2026-03-10 01:14:21.723958 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-10 01:14:21.723964 | orchestrator | 2026-03-10 01:14:21.723970 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-10 01:14:21.723976 | orchestrator | Tuesday 10 March 2026 01:05:18 +0000 (0:00:00.363) 0:01:52.736 ********* 2026-03-10 01:14:21.723982 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:21.723987 | orchestrator | 2026-03-10 01:14:21.723993 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-10 01:14:21.723999 | orchestrator | Tuesday 10 March 2026 01:05:19 +0000 (0:00:00.626) 0:01:53.363 ********* 2026-03-10 01:14:21.724005 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724011 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724016 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.724022 | orchestrator | 2026-03-10 01:14:21.724028 | orchestrator | TASK 
[nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-10 01:14:21.724034 | orchestrator | Tuesday 10 March 2026 01:05:21 +0000 (0:00:02.211) 0:01:55.575 ********* 2026-03-10 01:14:21.724039 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724045 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724051 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.724057 | orchestrator | 2026-03-10 01:14:21.724063 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-10 01:14:21.724074 | orchestrator | Tuesday 10 March 2026 01:05:23 +0000 (0:00:02.450) 0:01:58.026 ********* 2026-03-10 01:14:21.724080 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724086 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724095 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724101 | orchestrator | 2026-03-10 01:14:21.724121 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-10 01:14:21.724132 | orchestrator | Tuesday 10 March 2026 01:05:24 +0000 (0:00:00.415) 0:01:58.441 ********* 2026-03-10 01:14:21.724137 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 01:14:21.724143 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724149 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 01:14:21.724155 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724161 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-10 01:14:21.724167 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-10 01:14:21.724173 | orchestrator | 2026-03-10 01:14:21.724179 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-10 01:14:21.724185 | orchestrator | Tuesday 10 March 2026 01:05:34 +0000 (0:00:10.233) 0:02:08.675 ********* 2026-03-10 
01:14:21.724191 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724196 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724202 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724208 | orchestrator | 2026-03-10 01:14:21.724214 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-10 01:14:21.724220 | orchestrator | Tuesday 10 March 2026 01:05:35 +0000 (0:00:00.541) 0:02:09.216 ********* 2026-03-10 01:14:21.724226 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-10 01:14:21.724232 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724237 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 01:14:21.724243 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724249 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 01:14:21.724255 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724261 | orchestrator | 2026-03-10 01:14:21.724267 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-10 01:14:21.724272 | orchestrator | Tuesday 10 March 2026 01:05:36 +0000 (0:00:01.650) 0:02:10.866 ********* 2026-03-10 01:14:21.724278 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724284 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724290 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.724296 | orchestrator | 2026-03-10 01:14:21.724302 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-10 01:14:21.724307 | orchestrator | Tuesday 10 March 2026 01:05:37 +0000 (0:00:00.847) 0:02:11.714 ********* 2026-03-10 01:14:21.724313 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724319 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724325 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.724331 | orchestrator 
| 2026-03-10 01:14:21.724337 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-10 01:14:21.724343 | orchestrator | Tuesday 10 March 2026 01:05:38 +0000 (0:00:01.073) 0:02:12.787 ********* 2026-03-10 01:14:21.724348 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724354 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724360 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.724366 | orchestrator | 2026-03-10 01:14:21.724372 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-10 01:14:21.724377 | orchestrator | Tuesday 10 March 2026 01:05:40 +0000 (0:00:02.143) 0:02:14.931 ********* 2026-03-10 01:14:21.724383 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724389 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724395 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.724400 | orchestrator | 2026-03-10 01:14:21.724406 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-10 01:14:21.724418 | orchestrator | Tuesday 10 March 2026 01:06:03 +0000 (0:00:22.680) 0:02:37.611 ********* 2026-03-10 01:14:21.724424 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724429 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724435 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.724441 | orchestrator | 2026-03-10 01:14:21.724447 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-10 01:14:21.724453 | orchestrator | Tuesday 10 March 2026 01:06:19 +0000 (0:00:15.488) 0:02:53.100 ********* 2026-03-10 01:14:21.724458 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.724464 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724470 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724476 | orchestrator | 2026-03-10 
01:14:21.724482 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-10 01:14:21.724488 | orchestrator | Tuesday 10 March 2026 01:06:20 +0000 (0:00:01.602) 0:02:54.702 ********* 2026-03-10 01:14:21.724493 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724499 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724505 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.724511 | orchestrator | 2026-03-10 01:14:21.724517 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-10 01:14:21.724522 | orchestrator | Tuesday 10 March 2026 01:06:36 +0000 (0:00:15.460) 0:03:10.163 ********* 2026-03-10 01:14:21.724528 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724534 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724540 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724546 | orchestrator | 2026-03-10 01:14:21.724552 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-10 01:14:21.724558 | orchestrator | Tuesday 10 March 2026 01:06:37 +0000 (0:00:01.149) 0:03:11.313 ********* 2026-03-10 01:14:21.724563 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724569 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724575 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724581 | orchestrator | 2026-03-10 01:14:21.724587 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-10 01:14:21.724593 | orchestrator | 2026-03-10 01:14:21.724599 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:14:21.724604 | orchestrator | Tuesday 10 March 2026 01:06:37 +0000 (0:00:00.585) 0:03:11.898 ********* 2026-03-10 01:14:21.724610 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:21.724616 | orchestrator | 2026-03-10 01:14:21.724625 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-10 01:14:21.724631 | orchestrator | Tuesday 10 March 2026 01:06:38 +0000 (0:00:01.053) 0:03:12.952 ********* 2026-03-10 01:14:21.724643 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-10 01:14:21.724649 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-10 01:14:21.724655 | orchestrator | 2026-03-10 01:14:21.724660 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-10 01:14:21.724666 | orchestrator | Tuesday 10 March 2026 01:06:42 +0000 (0:00:03.572) 0:03:16.524 ********* 2026-03-10 01:14:21.724672 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-10 01:14:21.724678 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-10 01:14:21.724684 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-10 01:14:21.724690 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-10 01:14:21.724696 | orchestrator | 2026-03-10 01:14:21.724702 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-10 01:14:21.724707 | orchestrator | Tuesday 10 March 2026 01:06:49 +0000 (0:00:06.770) 0:03:23.294 ********* 2026-03-10 01:14:21.724718 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:14:21.724724 | orchestrator | 2026-03-10 01:14:21.724730 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-10 01:14:21.724736 | 
orchestrator | Tuesday 10 March 2026 01:06:53 +0000 (0:00:03.919) 0:03:27.214 ********* 2026-03-10 01:14:21.724742 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:14:21.724748 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-10 01:14:21.724753 | orchestrator | 2026-03-10 01:14:21.724759 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-10 01:14:21.724765 | orchestrator | Tuesday 10 March 2026 01:06:57 +0000 (0:00:04.672) 0:03:31.886 ********* 2026-03-10 01:14:21.724771 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:14:21.724777 | orchestrator | 2026-03-10 01:14:21.724782 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-10 01:14:21.724788 | orchestrator | Tuesday 10 March 2026 01:07:01 +0000 (0:00:03.536) 0:03:35.423 ********* 2026-03-10 01:14:21.724794 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-10 01:14:21.724800 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-10 01:14:21.724806 | orchestrator | 2026-03-10 01:14:21.724812 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-10 01:14:21.724817 | orchestrator | Tuesday 10 March 2026 01:07:09 +0000 (0:00:08.021) 0:03:43.444 ********* 2026-03-10 01:14:21.724829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.724848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2026-03-10 01:14:21.724859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.724866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.724875 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.724881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.724887 | orchestrator | 2026-03-10 01:14:21.724893 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-10 01:14:21.724899 | orchestrator | Tuesday 10 March 2026 01:07:11 +0000 (0:00:01.908) 0:03:45.353 ********* 2026-03-10 01:14:21.724905 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724911 | orchestrator | 2026-03-10 01:14:21.724917 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-10 01:14:21.724923 | orchestrator | Tuesday 10 March 2026 01:07:11 +0000 (0:00:00.206) 0:03:45.559 ********* 2026-03-10 01:14:21.724929 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.724935 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.724941 | 
orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.724946 | orchestrator | 2026-03-10 01:14:21.724952 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-10 01:14:21.724966 | orchestrator | Tuesday 10 March 2026 01:07:12 +0000 (0:00:00.521) 0:03:46.081 ********* 2026-03-10 01:14:21.724972 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:14:21.724978 | orchestrator | 2026-03-10 01:14:21.724987 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-10 01:14:21.724993 | orchestrator | Tuesday 10 March 2026 01:07:14 +0000 (0:00:02.189) 0:03:48.270 ********* 2026-03-10 01:14:21.724999 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.725005 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.725011 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.725017 | orchestrator | 2026-03-10 01:14:21.725022 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:14:21.725028 | orchestrator | Tuesday 10 March 2026 01:07:15 +0000 (0:00:01.346) 0:03:49.617 ********* 2026-03-10 01:14:21.725034 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:21.725040 | orchestrator | 2026-03-10 01:14:21.725046 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-10 01:14:21.725052 | orchestrator | Tuesday 10 March 2026 01:07:16 +0000 (0:00:00.849) 0:03:50.466 ********* 2026-03-10 01:14:21.725058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725162 | orchestrator | 2026-03-10 01:14:21.725168 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-10 01:14:21.725174 | orchestrator | Tuesday 10 March 2026 01:07:21 +0000 (0:00:04.607) 0:03:55.073 ********* 2026-03-10 01:14:21.725180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725221 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.725227 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.725233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725250 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.725256 | orchestrator | 2026-03-10 01:14:21.725262 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-10 01:14:21.725271 | orchestrator | Tuesday 10 March 2026 01:07:21 +0000 (0:00:00.796) 0:03:55.869 ********* 
2026-03-10 01:14:21.725281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725294 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:14:21.725300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725317 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
01:14:21.725334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725347 | orchestrator | skipping: [testbed-node-2] 2026-03-10 
01:14:21.725353 | orchestrator | 2026-03-10 01:14:21.725358 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-10 01:14:21.725365 | orchestrator | Tuesday 10 March 2026 01:07:23 +0000 (0:00:01.456) 0:03:57.326 ********* 2026-03-10 01:14:21.725371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-10 01:14:21.725426 | orchestrator | 2026-03-10 01:14:21.725432 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-10 01:14:21.725438 | orchestrator | Tuesday 10 March 2026 01:07:26 +0000 (0:00:03.453) 0:04:00.780 ********* 2026-03-10 01:14:21.725452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725503 | orchestrator | 2026-03-10 01:14:21.725509 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-10 01:14:21.725515 | orchestrator | Tuesday 10 March 2026 01:07:42 +0000 (0:00:15.938) 0:04:16.719 ********* 2026-03-10 01:14:21.725521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725539 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.725545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725569 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.725575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:14:21.725581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.725591 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.725597 | orchestrator | 2026-03-10 01:14:21.725603 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-10 01:14:21.725609 | orchestrator | Tuesday 10 March 2026 01:07:43 +0000 (0:00:00.795) 0:04:17.514 ********* 2026-03-10 01:14:21.725615 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:21.725621 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.725626 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:21.725632 | orchestrator | 2026-03-10 01:14:21.725638 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-10 01:14:21.725644 | orchestrator | Tuesday 10 March 2026 01:07:45 +0000 (0:00:02.328) 0:04:19.843 ********* 2026-03-10 01:14:21.725650 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.725655 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.725661 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.725667 | orchestrator | 2026-03-10 01:14:21.725673 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-10 01:14:21.725678 | orchestrator | Tuesday 10 March 2026 01:07:46 +0000 (0:00:01.036) 0:04:20.880 ********* 2026-03-10 01:14:21.725694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:21.725739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.725746 | orchestrator | 2026-03-10 01:14:21.725751 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-10 01:14:21.725757 | orchestrator | Tuesday 10 March 2026 01:07:50 +0000 (0:00:03.858) 0:04:24.738 ********* 2026-03-10 01:14:21.725763 | orchestrator | 2026-03-10 01:14:21.725769 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-10 01:14:21.725775 | orchestrator | Tuesday 10 March 2026 01:07:50 +0000 (0:00:00.276) 0:04:25.015 ********* 2026-03-10 
01:14:21.725781 | orchestrator | 2026-03-10 01:14:21.725787 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-10 01:14:21.725792 | orchestrator | Tuesday 10 March 2026 01:07:51 +0000 (0:00:00.170) 0:04:25.185 ********* 2026-03-10 01:14:21.725802 | orchestrator | 2026-03-10 01:14:21.725808 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-10 01:14:21.725814 | orchestrator | Tuesday 10 March 2026 01:07:51 +0000 (0:00:00.173) 0:04:25.358 ********* 2026-03-10 01:14:21.725820 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.725826 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:21.725831 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:21.725837 | orchestrator | 2026-03-10 01:14:21.725843 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-10 01:14:21.725849 | orchestrator | Tuesday 10 March 2026 01:08:10 +0000 (0:00:19.560) 0:04:44.919 ********* 2026-03-10 01:14:21.725854 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.725860 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:21.725866 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:21.725872 | orchestrator | 2026-03-10 01:14:21.725878 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-10 01:14:21.725883 | orchestrator | 2026-03-10 01:14:21.725889 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:14:21.725895 | orchestrator | Tuesday 10 March 2026 01:08:25 +0000 (0:00:14.156) 0:04:59.076 ********* 2026-03-10 01:14:21.725901 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:21.725907 | orchestrator | 2026-03-10 01:14:21.725913 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:14:21.725919 | orchestrator | Tuesday 10 March 2026 01:08:27 +0000 (0:00:02.801) 0:05:01.878 ********* 2026-03-10 01:14:21.725925 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:14:21.725930 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:14:21.725936 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:14:21.725942 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.725948 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.725953 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.725959 | orchestrator | 2026-03-10 01:14:21.725965 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-10 01:14:21.725971 | orchestrator | Tuesday 10 March 2026 01:08:29 +0000 (0:00:01.880) 0:05:03.758 ********* 2026-03-10 01:14:21.725977 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.725983 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.725988 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.725994 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:14:21.726000 | orchestrator | 2026-03-10 01:14:21.726006 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-10 01:14:21.726012 | orchestrator | Tuesday 10 March 2026 01:08:32 +0000 (0:00:02.848) 0:05:06.607 ********* 2026-03-10 01:14:21.726074 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-10 01:14:21.726080 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-10 01:14:21.726087 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-10 01:14:21.726094 | orchestrator | 2026-03-10 01:14:21.726100 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-10 
01:14:21.726273 | orchestrator | Tuesday 10 March 2026 01:08:33 +0000 (0:00:01.123) 0:05:07.731 ********* 2026-03-10 01:14:21.726402 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-10 01:14:21.726419 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-10 01:14:21.726432 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-10 01:14:21.726444 | orchestrator | 2026-03-10 01:14:21.726457 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-10 01:14:21.726469 | orchestrator | Tuesday 10 March 2026 01:08:35 +0000 (0:00:01.891) 0:05:09.623 ********* 2026-03-10 01:14:21.726480 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-10 01:14:21.726491 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:14:21.726535 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-10 01:14:21.726545 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:14:21.726555 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-10 01:14:21.726565 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:14:21.726574 | orchestrator | 2026-03-10 01:14:21.726584 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-10 01:14:21.726594 | orchestrator | Tuesday 10 March 2026 01:08:36 +0000 (0:00:00.942) 0:05:10.565 ********* 2026-03-10 01:14:21.726604 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-10 01:14:21.726641 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-10 01:14:21.726677 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 01:14:21.726697 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 01:14:21.726715 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 01:14:21.726732 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 01:14:21.726748 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 01:14:21.726763 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-10 01:14:21.726772 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.726782 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 01:14:21.726791 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 01:14:21.726801 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.726810 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-10 01:14:21.726820 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-10 01:14:21.726829 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-10 01:14:21.726839 | orchestrator | 2026-03-10 01:14:21.726848 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-10 01:14:21.726858 | orchestrator | Tuesday 10 March 2026 01:08:37 +0000 (0:00:01.415) 0:05:11.981 ********* 2026-03-10 01:14:21.726867 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.726877 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.726887 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.726896 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:14:21.726906 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:14:21.726915 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:14:21.726925 | orchestrator | 2026-03-10 01:14:21.726934 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-10 01:14:21.726944 | orchestrator | 
Tuesday 10 March 2026 01:08:39 +0000 (0:00:01.407) 0:05:13.389 ********* 2026-03-10 01:14:21.726953 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.726963 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.726972 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.726982 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:14:21.726991 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:14:21.727000 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:14:21.727010 | orchestrator | 2026-03-10 01:14:21.727019 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-10 01:14:21.727029 | orchestrator | Tuesday 10 March 2026 01:08:41 +0000 (0:00:02.210) 0:05:15.599 ********* 2026-03-10 01:14:21.727068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727418 | orchestrator | 2026-03-10 01:14:21.727432 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2026-03-10 01:14:21.727442 | orchestrator | Tuesday 10 March 2026 01:08:45 +0000 (0:00:03.808) 0:05:19.408 ********* 2026-03-10 01:14:21.727454 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:21.727466 | orchestrator | 2026-03-10 01:14:21.727476 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-10 01:14:21.727486 | orchestrator | Tuesday 10 March 2026 01:08:47 +0000 (0:00:02.290) 0:05:21.699 ********* 2026-03-10 01:14:21.727496 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 
2026-03-10 01:14:21.727563 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.727600 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.727610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.727656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727760 | orchestrator |
2026-03-10 01:14:21.727770 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-10 01:14:21.727780 | orchestrator | Tuesday 10 March 2026 01:08:53 +0000 (0:00:05.923) 0:05:27.622 *********
2026-03-10 01:14:21.727790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-10 01:14:21.727816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.727827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727844 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.727854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-10 01:14:21.727864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.727874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727884 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.727905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-10 01:14:21.727915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.727932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727942 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.727952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.727962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.727972 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.727982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.728003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728013 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.728023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.728039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728049 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.728059 | orchestrator |
2026-03-10 01:14:21.728069 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-10 01:14:21.728079 | orchestrator | Tuesday 10 March 2026 01:08:58 +0000 (0:00:04.591) 0:05:32.213 *********
2026-03-10 01:14:21.728089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-10 01:14:21.728099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.728150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728168 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.728201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-10 01:14:21.728228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.728244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728261 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.728278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-10 01:14:21.728295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-10 01:14:21.728329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728356 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.728371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.728386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728400 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.728415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.728429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728445 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.728461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-10 01:14:21.728478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.728511 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.728528 | orchestrator |
2026-03-10 01:14:21.728545 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-10 01:14:21.728572 | orchestrator | Tuesday 10 March 2026 01:09:02 +0000 (0:00:04.190) 0:05:36.404 *********
2026-03-10 01:14:21.728589 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.728606 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.728622 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.728640 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:14:21.728656 | orchestrator |
2026-03-10 01:14:21.728672 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-10 01:14:21.728689 | orchestrator | Tuesday 10 March 2026 01:09:03 +0000 (0:00:01.046) 0:05:37.451 *********
2026-03-10 01:14:21.728706 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-10 01:14:21.728722 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-10 01:14:21.728738 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-10 01:14:21.728760 | orchestrator |
2026-03-10 01:14:21.728777 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-10 01:14:21.728794 | orchestrator | Tuesday 10 March 2026 01:09:05 +0000 (0:00:01.659) 0:05:39.111 *********
2026-03-10 01:14:21.728810 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-10 01:14:21.728825 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-10 01:14:21.728835 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-10 01:14:21.728845 | orchestrator |
2026-03-10 01:14:21.728855 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-10 01:14:21.728864 | orchestrator | Tuesday 10 March 2026 01:09:06 +0000 (0:00:00.827) 0:05:40.333 *********
2026-03-10 01:14:21.728874 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:14:21.728884 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:14:21.728894 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:14:21.728903 | orchestrator |
2026-03-10 01:14:21.728913 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-10 01:14:21.728922 | orchestrator | Tuesday 10 March 2026 01:09:07 +0000 (0:00:01.272) 0:05:41.160 *********
2026-03-10 01:14:21.728932 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:14:21.728941 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:14:21.728951 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:14:21.728961 | orchestrator |
2026-03-10 01:14:21.728970 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-10 01:14:21.728980 | orchestrator | Tuesday 10 March 2026 01:09:08 +0000 (0:00:01.272) 0:05:42.433 *********
2026-03-10 01:14:21.728989 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-10 01:14:21.728999 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-10 01:14:21.729008 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-10 01:14:21.729018 | orchestrator |
2026-03-10 01:14:21.729027 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-10 01:14:21.729037 | orchestrator | Tuesday 10 March 2026 01:09:09 +0000 (0:00:01.554) 0:05:43.988 *********
2026-03-10 01:14:21.729047 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-10 01:14:21.729056 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-10 01:14:21.729066 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-10 01:14:21.729076 | orchestrator |
2026-03-10 01:14:21.729086 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-10 01:14:21.729096 | orchestrator | Tuesday 10 March 2026 01:09:12 +0000 (0:00:02.204) 0:05:46.193 *********
2026-03-10 01:14:21.729133 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-10 01:14:21.729147 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-10 01:14:21.729157 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-10 01:14:21.729178 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-10 01:14:21.729188 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-10 01:14:21.729198 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-10 01:14:21.729207 | orchestrator |
2026-03-10 01:14:21.729217 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-10 01:14:21.729227 | orchestrator | Tuesday 10 March 2026 01:09:19 +0000 (0:00:07.334) 0:05:53.528 *********
2026-03-10 01:14:21.729236 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.729246 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.729256 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.729272 | orchestrator |
2026-03-10 01:14:21.729287 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-10 01:14:21.729302 | orchestrator | Tuesday 10 March 2026 01:09:20 +0000 (0:00:01.132) 0:05:54.661 *********
2026-03-10 01:14:21.729319 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.729336 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.729352 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.729365 | orchestrator |
2026-03-10 01:14:21.729375 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-10 01:14:21.729385 | orchestrator | Tuesday 10 March 2026 01:09:20 +0000 (0:00:00.348) 0:05:55.010 *********
2026-03-10 01:14:21.729394 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.729404 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.729413 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.729423 | orchestrator |
2026-03-10 01:14:21.729432 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-10 01:14:21.729442 | orchestrator | Tuesday 10 March 2026 01:09:23 +0000 (0:00:02.477) 0:05:57.487 *********
2026-03-10 01:14:21.729453 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-10 01:14:21.729464 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-10 01:14:21.729483 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-10 01:14:21.729501 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-10 01:14:21.729511 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-10 01:14:21.729521 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-10 01:14:21.729530 | orchestrator |
2026-03-10 01:14:21.729540 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-10 01:14:21.729550 | orchestrator | Tuesday 10 March 2026 01:09:30 +0000 (0:00:07.015) 0:06:04.502 *********
2026-03-10 01:14:21.729559 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-10 01:14:21.729569 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-10 01:14:21.729578 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-10 01:14:21.729588 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-10 01:14:21.729598 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.729607 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-10 01:14:21.729617 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.729628 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-10 01:14:21.729645 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.729661 | orchestrator |
2026-03-10 01:14:21.729695 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-10 01:14:21.729725 | orchestrator | Tuesday 10 March 2026 01:09:34 +0000 (0:00:03.710) 0:06:08.213 *********
2026-03-10 01:14:21.729753 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.729770 | orchestrator |
2026-03-10 01:14:21.729785 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-10 01:14:21.729801 | orchestrator | Tuesday 10 March 2026 01:09:34 +0000 (0:00:00.200) 0:06:08.413 *********
2026-03-10 01:14:21.729817 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.729833 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.729850 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.729867 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.729884 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.729901 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.729916 | orchestrator |
2026-03-10 01:14:21.729931 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-10 01:14:21.729949 | orchestrator | Tuesday 10 March 2026 01:09:35 +0000 (0:00:00.702) 0:06:09.115 *********
2026-03-10 01:14:21.729965 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-10 01:14:21.729982 | orchestrator |
2026-03-10 01:14:21.729999 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-10 01:14:21.730074 | orchestrator | Tuesday 10 March 2026 01:09:35 +0000 (0:00:00.788) 0:06:09.904 *********
2026-03-10 01:14:21.730089 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.730099 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.730132 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.730144 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.730154 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.730163 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.730173 | orchestrator |
2026-03-10 01:14:21.730182 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-10 01:14:21.730192 | orchestrator | Tuesday 10 March 2026 01:09:36 +0000 (0:00:01.003) 0:06:10.907 ********* 2026-03-10 01:14:21.730203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730457 | orchestrator | 2026-03-10 01:14:21.730467 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-10 01:14:21.730477 | orchestrator | Tuesday 10 March 2026 01:09:43 +0000 (0:00:06.217) 0:06:17.125 ********* 2026-03-10 01:14:21.730518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:14:21.730530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:14:21.730540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:14:21.730550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:14:21.730572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:14:21.730589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:14:21.730600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.730769 | orchestrator | 2026-03-10 01:14:21.730786 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-10 01:14:21.730802 | orchestrator | Tuesday 10 March 2026 01:09:50 +0000 (0:00:07.341) 0:06:24.466 ********* 2026-03-10 01:14:21.730818 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:14:21.730828 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:14:21.730838 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.730848 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:14:21.730857 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.730876 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.730888 | orchestrator | 2026-03-10 01:14:21.730904 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-10 01:14:21.730920 | orchestrator | Tuesday 10 March 2026 01:09:52 +0000 (0:00:02.413) 0:06:26.880 ********* 2026-03-10 01:14:21.730936 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-10 01:14:21.730952 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-10 01:14:21.730968 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-10 01:14:21.730984 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-10 01:14:21.731002 | orchestrator | skipping: [testbed-node-0] 
=> (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-10 01:14:21.731027 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.731041 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-10 01:14:21.731060 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-10 01:14:21.731070 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-10 01:14:21.731079 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.731090 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-10 01:14:21.731131 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.731149 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-10 01:14:21.731164 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-10 01:14:21.731181 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-10 01:14:21.731197 | orchestrator | 2026-03-10 01:14:21.731214 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-10 01:14:21.731231 | orchestrator | Tuesday 10 March 2026 01:09:58 +0000 (0:00:05.303) 0:06:32.184 ********* 2026-03-10 01:14:21.731246 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:14:21.731261 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:14:21.731271 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:14:21.731281 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.731290 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.731300 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.731310 | orchestrator | 2026-03-10 01:14:21.731319 | orchestrator | 
TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-10 01:14:21.731330 | orchestrator | Tuesday 10 March 2026 01:09:58 +0000 (0:00:00.633) 0:06:32.817 ********* 2026-03-10 01:14:21.731339 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-10 01:14:21.731349 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-10 01:14:21.731359 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-10 01:14:21.731369 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-10 01:14:21.731378 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-10 01:14:21.731388 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-10 01:14:21.731398 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-10 01:14:21.731407 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-10 01:14:21.731426 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-10 01:14:21.731436 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-10 01:14:21.731445 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.731458 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-10 01:14:21.731474 | orchestrator 
| skipping: [testbed-node-1] 2026-03-10 01:14:21.731490 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-10 01:14:21.731505 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.731521 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:14:21.731538 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:14:21.731555 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:14:21.731570 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:14:21.731585 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:14:21.731602 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:14:21.731620 | orchestrator | 2026-03-10 01:14:21.731637 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-10 01:14:21.731654 | orchestrator | Tuesday 10 March 2026 01:10:05 +0000 (0:00:06.838) 0:06:39.656 ********* 2026-03-10 01:14:21.731670 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 01:14:21.731687 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 01:14:21.731703 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 01:14:21.731728 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 
01:14:21.731753 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-10 01:14:21.731770 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-10 01:14:21.731786 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-10 01:14:21.731799 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-10 01:14:21.731809 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-10 01:14:21.731819 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 01:14:21.731828 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 01:14:21.731838 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-10 01:14:21.731847 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-10 01:14:21.731857 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 01:14:21.731867 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-10 01:14:21.731876 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.731886 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-10 01:14:21.731895 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-10 01:14:21.731913 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.731923 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-10 01:14:21.731932 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.731942 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 01:14:21.731951 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 01:14:21.731961 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 01:14:21.731970 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-10 01:14:21.731980 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-10 01:14:21.731990 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-10 01:14:21.731999 | orchestrator |
2026-03-10 01:14:21.732009 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-10 01:14:21.732018 | orchestrator | Tuesday 10 March 2026 01:10:13 +0000 (0:00:07.865) 0:06:47.521 *********
2026-03-10 01:14:21.732028 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.732038 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.732047 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.732057 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.732067 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.732076 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.732086 | orchestrator |
2026-03-10 01:14:21.732095 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-10 01:14:21.732128 | orchestrator | Tuesday 10 March 2026 01:10:14 +0000 (0:00:00.927) 0:06:48.449 *********
2026-03-10 01:14:21.732146 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.732163 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.732180 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.732195 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.732212 |
orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.732229 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.732244 | orchestrator | 2026-03-10 01:14:21.732262 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-10 01:14:21.732272 | orchestrator | Tuesday 10 March 2026 01:10:15 +0000 (0:00:00.897) 0:06:49.346 ********* 2026-03-10 01:14:21.732282 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.732291 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.732301 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.732310 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:14:21.732320 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:14:21.732329 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:14:21.732339 | orchestrator | 2026-03-10 01:14:21.732349 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-10 01:14:21.732358 | orchestrator | Tuesday 10 March 2026 01:10:17 +0000 (0:00:02.465) 0:06:51.812 ********* 2026-03-10 01:14:21.732377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-03-10 01:14:21.732402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:14:21.732413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.732423 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:14:21.732433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:14:21.732444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.732454 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.732464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:14:21.732486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:14:21.732503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.732514 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:14:21.732524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:14:21.732534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.732544 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.732554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:14:21.732564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:14:21.732587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:14:21.732604 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:14:21.732614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:14:21.732625 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:14:21.732643 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.732659 | orchestrator |
2026-03-10 01:14:21.732675 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-10 01:14:21.732692 | orchestrator | Tuesday 10 March 2026 01:10:19 +0000 (0:00:01.805) 0:06:53.617 *********
2026-03-10 01:14:21.732709 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-10 01:14:21.732724 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-10 01:14:21.732739 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.732753 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-10 01:14:21.732767 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-10 01:14:21.732782 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.732798 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-10 01:14:21.732814 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-10 01:14:21.732829 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.732845 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-10 01:14:21.732860 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-10 01:14:21.732875 | orchestrator | skipping: [testbed-node-0]
2026-03-10
01:14:21.732890 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-10 01:14:21.732906 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-10 01:14:21.732921 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.732935 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-10 01:14:21.732953 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-10 01:14:21.732970 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.732985 | orchestrator | 2026-03-10 01:14:21.733000 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-10 01:14:21.733028 | orchestrator | Tuesday 10 March 2026 01:10:20 +0000 (0:00:01.029) 0:06:54.647 ********* 2026-03-10 01:14:21.733047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:21.733384 | orchestrator | 2026-03-10 01:14:21.733401 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:14:21.733417 | 
orchestrator | Tuesday 10 March 2026 01:10:23 +0000 (0:00:03.348) 0:06:57.995 *********
2026-03-10 01:14:21.733433 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.733443 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.733453 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.733463 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.733472 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.733482 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.733491 | orchestrator |
2026-03-10 01:14:21.733500 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:14:21.733510 | orchestrator | Tuesday 10 March 2026 01:10:25 +0000 (0:00:01.263) 0:06:59.258 *********
2026-03-10 01:14:21.733520 | orchestrator |
2026-03-10 01:14:21.733529 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:14:21.733539 | orchestrator | Tuesday 10 March 2026 01:10:25 +0000 (0:00:00.355) 0:06:59.614 *********
2026-03-10 01:14:21.733548 | orchestrator |
2026-03-10 01:14:21.733558 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:14:21.733577 | orchestrator | Tuesday 10 March 2026 01:10:25 +0000 (0:00:00.295) 0:06:59.910 *********
2026-03-10 01:14:21.733587 | orchestrator |
2026-03-10 01:14:21.733603 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:14:21.733619 | orchestrator | Tuesday 10 March 2026 01:10:26 +0000 (0:00:00.157) 0:07:00.067 *********
2026-03-10 01:14:21.733635 | orchestrator |
2026-03-10 01:14:21.733650 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:14:21.733666 | orchestrator | Tuesday 10 March 2026 01:10:26 +0000 (0:00:00.320) 0:07:00.387 *********
2026-03-10 01:14:21.733681 | orchestrator |
2026-03-10 01:14:21.733695 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:14:21.733708 | orchestrator | Tuesday 10 March 2026 01:10:26 +0000 (0:00:00.305) 0:07:00.693 *********
2026-03-10 01:14:21.733722 | orchestrator |
2026-03-10 01:14:21.733737 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-10 01:14:21.733752 | orchestrator | Tuesday 10 March 2026 01:10:27 +0000 (0:00:00.829) 0:07:01.523 *********
2026-03-10 01:14:21.733766 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:14:21.733782 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:14:21.733797 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:14:21.733812 | orchestrator |
2026-03-10 01:14:21.733829 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-10 01:14:21.733844 | orchestrator | Tuesday 10 March 2026 01:10:41 +0000 (0:00:14.256) 0:07:15.779 *********
2026-03-10 01:14:21.733866 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:14:21.733881 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:14:21.733898 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:14:21.733914 | orchestrator |
2026-03-10 01:14:21.733928 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-10 01:14:21.733943 | orchestrator | Tuesday 10 March 2026 01:10:56 +0000 (0:00:14.907) 0:07:30.686 *********
2026-03-10 01:14:21.733958 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.733973 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.733989 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.734006 | orchestrator |
2026-03-10 01:14:21.734091 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-10 01:14:21.734103 | orchestrator | Tuesday 10 March 2026 01:11:54 +0000 (0:00:57.605) 0:08:28.292 *********
2026-03-10 01:14:21.734145 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.734156 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.734165 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.734175 | orchestrator |
2026-03-10 01:14:21.734185 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-10 01:14:21.734195 | orchestrator | Tuesday 10 March 2026 01:12:32 +0000 (0:00:38.389) 0:09:06.681 *********
2026-03-10 01:14:21.734205 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.734214 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.734224 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.734234 | orchestrator |
2026-03-10 01:14:21.734243 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-10 01:14:21.734265 | orchestrator | Tuesday 10 March 2026 01:12:33 +0000 (0:00:00.830) 0:09:07.512 *********
2026-03-10 01:14:21.734275 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.734293 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.734303 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.734312 | orchestrator |
2026-03-10 01:14:21.734322 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-10 01:14:21.734332 | orchestrator | Tuesday 10 March 2026 01:12:34 +0000 (0:00:00.827) 0:09:08.340 *********
2026-03-10 01:14:21.734341 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:14:21.734351 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:14:21.734360 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:14:21.734370 | orchestrator |
2026-03-10 01:14:21.734379 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-10 01:14:21.734399 | orchestrator | Tuesday 10 March 2026 01:12:58 +0000 (0:00:23.949) 0:09:32.289 *********
2026-03-10 01:14:21.734409 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.734418 | orchestrator |
2026-03-10 01:14:21.734428 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-10 01:14:21.734437 | orchestrator | Tuesday 10 March 2026 01:12:58 +0000 (0:00:00.125) 0:09:32.414 *********
2026-03-10 01:14:21.734447 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.734456 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.734466 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.734475 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.734485 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.734495 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-10 01:14:21.734506 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:14:21.734515 | orchestrator |
2026-03-10 01:14:21.734525 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-10 01:14:21.734535 | orchestrator | Tuesday 10 March 2026 01:13:23 +0000 (0:00:24.774) 0:09:57.189 *********
2026-03-10 01:14:21.734544 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.734554 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.734563 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.734573 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.734582 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.734592 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.734601 | orchestrator |
2026-03-10 01:14:21.734611 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-10 01:14:21.734620 | orchestrator | Tuesday 10 March 2026 01:13:36 +0000 (0:00:13.820) 0:10:11.009 *********
2026-03-10 01:14:21.734634 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:14:21.734650 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:14:21.734665 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:21.734681 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:21.734698 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:21.734715 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-03-10 01:14:21.734731 | orchestrator |
2026-03-10 01:14:21.734744 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-10 01:14:21.734754 | orchestrator | Tuesday 10 March 2026 01:13:41 +0000 (0:00:04.881) 0:10:15.891 *********
2026-03-10 01:14:21.734764 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:14:21.734773 | orchestrator |
2026-03-10 01:14:21.734783 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-10 01:14:21.734792 | orchestrator | Tuesday 10 March 2026 01:13:57 +0000 (0:00:15.202) 0:10:31.093 *********
2026-03-10 01:14:21.734802 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:14:21.734811 | orchestrator |
2026-03-10 01:14:21.734821 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-10 01:14:21.734830 | orchestrator | Tuesday 10 March 2026 01:13:58 +0000 (0:00:01.390) 0:10:32.484 *********
2026-03-10 01:14:21.734840 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:14:21.734849 | orchestrator |
2026-03-10 01:14:21.734858 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-10 01:14:21.734868 | orchestrator | Tuesday 10 March 2026 01:13:59 +0000 (0:00:01.379) 0:10:33.864 *********
2026-03-10 01:14:21.734877 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:14:21.734887 | orchestrator | 2026-03-10 01:14:21.734896 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-10 01:14:21.734906 | orchestrator | Tuesday 10 March 2026 01:14:12 +0000 (0:00:13.025) 0:10:46.890 ********* 2026-03-10 01:14:21.734915 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:14:21.734932 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:14:21.734941 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:14:21.734951 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:21.734960 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:21.734970 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:21.734979 | orchestrator | 2026-03-10 01:14:21.734989 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-10 01:14:21.734998 | orchestrator | 2026-03-10 01:14:21.735008 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-10 01:14:21.735017 | orchestrator | Tuesday 10 March 2026 01:14:15 +0000 (0:00:02.167) 0:10:49.058 ********* 2026-03-10 01:14:21.735026 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:21.735036 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:21.735046 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:21.735055 | orchestrator | 2026-03-10 01:14:21.735065 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-10 01:14:21.735074 | orchestrator | 2026-03-10 01:14:21.735084 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-10 01:14:21.735093 | orchestrator | Tuesday 10 March 2026 01:14:16 +0000 (0:00:01.628) 0:10:50.686 ********* 2026-03-10 01:14:21.735103 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.735144 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.735154 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 01:14:21.735163 | orchestrator | 2026-03-10 01:14:21.735173 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-10 01:14:21.735182 | orchestrator | 2026-03-10 01:14:21.735198 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-10 01:14:21.735214 | orchestrator | Tuesday 10 March 2026 01:14:17 +0000 (0:00:00.615) 0:10:51.302 ********* 2026-03-10 01:14:21.735224 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-10 01:14:21.735234 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-10 01:14:21.735244 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-10 01:14:21.735253 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-10 01:14:21.735263 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-10 01:14:21.735273 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-10 01:14:21.735282 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:14:21.735292 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-10 01:14:21.735302 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-10 01:14:21.735311 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-10 01:14:21.735321 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-10 01:14:21.735330 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-10 01:14:21.735340 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-10 01:14:21.735349 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:14:21.735359 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-10 01:14:21.735368 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  
2026-03-10 01:14:21.735378 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-10 01:14:21.735387 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-10 01:14:21.735396 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-10 01:14:21.735406 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-10 01:14:21.735416 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:14:21.735425 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-10 01:14:21.735435 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-10 01:14:21.735444 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-10 01:14:21.735461 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-10 01:14:21.735470 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-10 01:14:21.735480 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-10 01:14:21.735489 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.735499 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-10 01:14:21.735508 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-10 01:14:21.735518 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-10 01:14:21.735528 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-10 01:14:21.735537 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-10 01:14:21.735546 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-10 01:14:21.735556 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.735565 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-10 01:14:21.735575 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  
2026-03-10 01:14:21.735584 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-10 01:14:21.735594 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-10 01:14:21.735725 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-10 01:14:21.735745 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-10 01:14:21.735758 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.735768 | orchestrator | 2026-03-10 01:14:21.735778 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-10 01:14:21.735788 | orchestrator | 2026-03-10 01:14:21.735797 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-10 01:14:21.735807 | orchestrator | Tuesday 10 March 2026 01:14:18 +0000 (0:00:01.643) 0:10:52.945 ********* 2026-03-10 01:14:21.735816 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-10 01:14:21.735826 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-10 01:14:21.735836 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.735845 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-10 01:14:21.735855 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-10 01:14:21.735864 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.735873 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-10 01:14:21.735883 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-10 01:14:21.735892 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.735902 | orchestrator | 2026-03-10 01:14:21.735911 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-10 01:14:21.735921 | orchestrator | 2026-03-10 01:14:21.735930 | orchestrator | TASK [nova : Run Nova API online 
database migrations] ************************** 2026-03-10 01:14:21.735940 | orchestrator | Tuesday 10 March 2026 01:14:19 +0000 (0:00:00.896) 0:10:53.842 ********* 2026-03-10 01:14:21.735949 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.735993 | orchestrator | 2026-03-10 01:14:21.736004 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-10 01:14:21.736014 | orchestrator | 2026-03-10 01:14:21.736023 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-10 01:14:21.736033 | orchestrator | Tuesday 10 March 2026 01:14:20 +0000 (0:00:00.720) 0:10:54.562 ********* 2026-03-10 01:14:21.736043 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:21.736052 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:21.736069 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:21.736079 | orchestrator | 2026-03-10 01:14:21.736089 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:14:21.736105 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:14:21.736160 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-10 01:14:21.736171 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-10 01:14:21.736180 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-10 01:14:21.736190 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-10 01:14:21.736200 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-10 01:14:21.736209 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  
rescued=0 ignored=0 2026-03-10 01:14:21.736218 | orchestrator | 2026-03-10 01:14:21.736228 | orchestrator | 2026-03-10 01:14:21.736237 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:14:21.736247 | orchestrator | Tuesday 10 March 2026 01:14:21 +0000 (0:00:00.491) 0:10:55.053 ********* 2026-03-10 01:14:21.736256 | orchestrator | =============================================================================== 2026-03-10 01:14:21.736266 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 57.61s 2026-03-10 01:14:21.736276 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.39s 2026-03-10 01:14:21.736285 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.84s 2026-03-10 01:14:21.736294 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.77s 2026-03-10 01:14:21.736304 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.95s 2026-03-10 01:14:21.736313 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.68s 2026-03-10 01:14:21.736322 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.72s 2026-03-10 01:14:21.736332 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.56s 2026-03-10 01:14:21.736341 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.82s 2026-03-10 01:14:21.736351 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 15.94s 2026-03-10 01:14:21.736360 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.49s 2026-03-10 01:14:21.736369 | orchestrator | nova-cell : Create cell ------------------------------------------------ 15.46s 2026-03-10 01:14:21.736379 | 
orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.44s 2026-03-10 01:14:21.736388 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.20s 2026-03-10 01:14:21.736398 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.91s 2026-03-10 01:14:21.736407 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 14.26s 2026-03-10 01:14:21.736417 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.16s 2026-03-10 01:14:21.736426 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 13.82s 2026-03-10 01:14:21.736435 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.03s 2026-03-10 01:14:21.736445 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 10.23s 2026-03-10 01:14:21.736454 | orchestrator | 2026-03-10 01:14:21 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:21.736464 | orchestrator | 2026-03-10 01:14:21 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:14:21.736473 | orchestrator | 2026-03-10 01:14:21 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state STARTED 2026-03-10 01:14:21.736490 | orchestrator | 2026-03-10 01:14:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:52.245382 | orchestrator | 2026-03-10 01:14:52 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:14:52.247290 | orchestrator | 2026-03-10 01:14:52 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:14:52.250362 | orchestrator | 2026-03-10 01:14:52 | INFO  | Task 6e429885-ba08-416a-a18c-8ad3bb4f3a6a is in state SUCCESS 2026-03-10 01:14:52.254644 | orchestrator | 2026-03-10 01:14:52.254760 | orchestrator | 2026-03-10 01:14:52.254785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:52.254806 | orchestrator | 2026-03-10 01:14:52.254826 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:14:52.254845 | orchestrator | Tuesday 10 March 2026 01:12:01 +0000 (0:00:00.476) 0:00:00.476 ********* 2026-03-10 01:14:52.254861 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:52.254874 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:52.254902 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:52.254914 | orchestrator | 2026-03-10 01:14:52.254925 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:52.254937 | orchestrator | Tuesday 10 March 2026 01:12:02 +0000 (0:00:00.472) 0:00:00.948 ********* 2026-03-10 01:14:52.254948 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-10 01:14:52.254959 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-10 01:14:52.254970 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-10 01:14:52.254981 | orchestrator | 2026-03-10 01:14:52.254991 | orchestrator | PLAY [Apply role grafana]
****************************************************** 2026-03-10 01:14:52.255020 | orchestrator | 2026-03-10 01:14:52.255031 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-10 01:14:52.255041 | orchestrator | Tuesday 10 March 2026 01:12:02 +0000 (0:00:00.480) 0:00:01.428 ********* 2026-03-10 01:14:52.255052 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:52.255064 | orchestrator | 2026-03-10 01:14:52.255098 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-10 01:14:52.255115 | orchestrator | Tuesday 10 March 2026 01:12:03 +0000 (0:00:00.937) 0:00:02.366 ********* 2026-03-10 01:14:52.255131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255200 | orchestrator | 2026-03-10 01:14:52.255212 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-10 01:14:52.255224 | orchestrator | Tuesday 10 March 2026 01:12:04 +0000 (0:00:01.098) 0:00:03.464 ********* 2026-03-10 01:14:52.255236 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-10 01:14:52.255277 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-10 01:14:52.255290 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:14:52.255302 | orchestrator | 2026-03-10 01:14:52.255323 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-10 01:14:52.255343 | orchestrator | Tuesday 10 March 2026 01:12:05 +0000 (0:00:01.118) 0:00:04.582 ********* 2026-03-10 01:14:52.255362 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:52.255382 | orchestrator | 
2026-03-10 01:14:52.255400 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-10 01:14:52.255421 | orchestrator | Tuesday 10 March 2026 01:12:06 +0000 (0:00:00.900) 0:00:05.483 ********* 2026-03-10 01:14:52.255480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255563 | orchestrator | 2026-03-10 01:14:52.255582 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-10 01:14:52.255594 | orchestrator | Tuesday 10 March 2026 01:12:08 +0000 (0:00:01.469) 0:00:06.953 ********* 2026-03-10 01:14:52.255605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:14:52.255616 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:52.255628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:14:52.255639 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:52.255669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:14:52.255689 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:52.255705 | orchestrator | 2026-03-10 01:14:52.255730 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-10 01:14:52.255750 | orchestrator | Tuesday 10 March 2026 01:12:08 +0000 (0:00:00.422) 0:00:07.375 ********* 2026-03-10 01:14:52.255770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:14:52.255800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:14:52.255812 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:52.255823 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:52.255834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:14:52.255846 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:52.255857 | orchestrator | 2026-03-10 01:14:52.255867 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-10 01:14:52.255879 | orchestrator | Tuesday 10 March 2026 01:12:09 +0000 
(0:00:00.921) 0:00:08.297 ********* 2026-03-10 01:14:52.255890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255947 | orchestrator | 2026-03-10 01:14:52.255959 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-10 01:14:52.255969 | orchestrator | Tuesday 10 March 2026 01:12:10 +0000 (0:00:01.302) 0:00:09.599 ********* 2026-03-10 01:14:52.255981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.255995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.256015 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:14:52.256033 | orchestrator | 2026-03-10 01:14:52.256053 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-10 01:14:52.256072 | orchestrator | Tuesday 10 March 2026 01:12:12 +0000 (0:00:01.428) 0:00:11.027 ********* 2026-03-10 01:14:52.256112 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:52.256124 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:52.256135 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:52.256146 | orchestrator | 2026-03-10 01:14:52.256160 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-10 01:14:52.256178 | orchestrator | Tuesday 10 March 2026 01:12:12 +0000 (0:00:00.559) 0:00:11.586 ********* 2026-03-10 01:14:52.256196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-10 01:14:52.256214 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-10 01:14:52.256232 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-10 01:14:52.256247 | orchestrator | 2026-03-10 01:14:52.256264 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 
2026-03-10 01:14:52.256282 | orchestrator | Tuesday 10 March 2026 01:12:14 +0000 (0:00:01.387) 0:00:12.974 ********* 2026-03-10 01:14:52.256301 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-10 01:14:52.256345 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-10 01:14:52.256366 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-10 01:14:52.256385 | orchestrator | 2026-03-10 01:14:52.256403 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-10 01:14:52.256421 | orchestrator | Tuesday 10 March 2026 01:12:15 +0000 (0:00:01.366) 0:00:14.340 ********* 2026-03-10 01:14:52.256448 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:14:52.256468 | orchestrator | 2026-03-10 01:14:52.256487 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-10 01:14:52.256506 | orchestrator | Tuesday 10 March 2026 01:12:16 +0000 (0:00:00.792) 0:00:15.132 ********* 2026-03-10 01:14:52.256525 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-10 01:14:52.256544 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-10 01:14:52.256562 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:52.256580 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:52.256599 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:52.256616 | orchestrator | 2026-03-10 01:14:52.256634 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-10 01:14:52.256652 | orchestrator | Tuesday 10 March 2026 01:12:17 +0000 (0:00:00.702) 0:00:15.835 ********* 2026-03-10 01:14:52.256671 | orchestrator | skipping: 
[testbed-node-0] 2026-03-10 01:14:52.256689 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:52.256707 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:52.256726 | orchestrator | 2026-03-10 01:14:52.256744 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-10 01:14:52.256763 | orchestrator | Tuesday 10 March 2026 01:12:17 +0000 (0:00:00.583) 0:00:16.419 ********* 2026-03-10 01:14:52.256781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096987, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8477077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.256802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096987, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8477077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.256844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096987, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8477077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.256896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097245, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9049995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.256939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097245, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9049995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.256961 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097245, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9049995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.256981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097098, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.872999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097098, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.872999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257020 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097098, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.872999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097249, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9111428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097249, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9111428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257139 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097249, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9111428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097148, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8829355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097148, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8829355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-10 01:14:52.257198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097148, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8829355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097172, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9031248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097172, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9031248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097172, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9031248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096983, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8458657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096983, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8458657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096983, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8458657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096996, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8706279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096996, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8706279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.257403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096996, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.8706279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:52.258344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097101, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.872999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.258357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097101, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.872999, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.258364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097101, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.872999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.258371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097159, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.88484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.258395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097159, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.88484, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-10 01:14:52.258402 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json)
2026-03-10 01:14:52.258427 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-details.json)
2026-03-10 01:14:52.258436 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json)
2026-03-10 01:14:52.258443 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-details.json)
2026-03-10 01:14:52.258450 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_overview.json)
2026-03-10 01:14:52.258462 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_overview.json)
2026-03-10 01:14:52.258469 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json)
2026-03-10 01:14:52.258486 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_overview.json)
2026-03-10 01:14:52.258494 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-detail.json)
2026-03-10 01:14:52.258500 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-detail.json)
2026-03-10 01:14:52.258507 | orchestrator | changed: [testbed-node-2] => (item=ceph/osds-overview.json)
2026-03-10 01:14:52.258520 | orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json)
2026-03-10 01:14:52.258526 | orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json)
2026-03-10 01:14:52.258537 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json)
2026-03-10 01:14:52.258548 | orchestrator | changed: [testbed-node-0] => (item=ceph/multi-cluster-overview.json)
2026-03-10 01:14:52.258555 | orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json)
2026-03-10 01:14:52.258562 | orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json)
2026-03-10 01:14:52.258574 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json)
2026-03-10 01:14:52.258582 | orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json)
2026-03-10 01:14:52.258588 | orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json)
2026-03-10 01:14:52.258603 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json)
2026-03-10 01:14:52.258611 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json)
2026-03-10 01:14:52.258617 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json)
2026-03-10 01:14:52.258628 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json)
2026-03-10 01:14:52.258635 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json)
2026-03-10 01:14:52.258642 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json)
2026-03-10 01:14:52.258656 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json)
2026-03-10 01:14:52.258663 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json)
2026-03-10 01:14:52.258670 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json)
2026-03-10 01:14:52.258681 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json)
2026-03-10 01:14:52.258689 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json)
2026-03-10 01:14:52.258695 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json)
2026-03-10 01:14:52.258705 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json)
2026-03-10 01:14:52.258715 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json)
2026-03-10 01:14:52.258723 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json)
2026-03-10 01:14:52.258735 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json)
2026-03-10 01:14:52.258741 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json)
2026-03-10 01:14:52.258749 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json)
2026-03-10 01:14:52.258756 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json)
2026-03-10 01:14:52.258774 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json)
2026-03-10 01:14:52.258781 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json)
2026-03-10 01:14:52.258793 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json)
2026-03-10 01:14:52.258800 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json)
2026-03-10 01:14:52.258807 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json)
2026-03-10 01:14:52.258815 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json)
2026-03-10 01:14:52.258830 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json)
2026-03-10 01:14:52.258837 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json)
2026-03-10 01:14:52.258860 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json)
2026-03-10 01:14:52.258867 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json)
2026-03-10 01:14:52.258873 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_full.json)
2026-03-10 01:14:52.258880 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json)
2026-03-10 01:14:52.258896 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json)
2026-03-10 01:14:52.258904 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json)
2026-03-10 01:14:52.258915 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json)
2026-03-10 01:14:52.258923 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json)
2026-03-10 01:14:52.258929 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json)
2026-03-10 01:14:52.258937 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json)
2026-03-10 01:14:52.258950 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json)
2026-03-10 01:14:52.258961 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json)
2026-03-10 01:14:52.258975 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json)
2026-03-10 01:14:52.258983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097285, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9171798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.258990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.258998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097377, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097285, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9171798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097285, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9171798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097278, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9161851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097377, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.931, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097377, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097381, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097278, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9161851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097278, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9161851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097429, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9610004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097381, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097381, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097424, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9530003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097429, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9610004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097429, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9610004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097269, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9135773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097424, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9530003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097424, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9530003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1050517, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9139996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259206 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097269, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9135773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097418, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9497554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097269, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9135773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1050517, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9139996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1050517, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9139996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097423, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9521635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097418, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9497554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097418, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9497554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:14:52.259296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097423, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1773101832.9521635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-10 01:14:52.259303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097423, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773101832.9521635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-10 01:14:52.259310 | orchestrator |
2026-03-10 01:14:52.259319 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-10 01:14:52.259326 | orchestrator | Tuesday 10 March 2026 01:12:58 +0000 (0:00:41.009) 0:00:57.429 *********
2026-03-10 01:14:52.259333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 01:14:52.259339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 01:14:52.259346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-10 01:14:52.259358 | orchestrator |
2026-03-10 01:14:52.259364 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-10 01:14:52.259370 | orchestrator | Tuesday 10 March 2026 01:13:00 +0000 (0:00:01.578) 0:00:59.008 *********
2026-03-10 01:14:52.259377 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:14:52.259385 | orchestrator |
2026-03-10 01:14:52.259391 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-10 01:14:52.259398 | orchestrator | Tuesday 10 March 2026 01:13:03 +0000 (0:00:03.037) 0:01:02.046 *********
2026-03-10 01:14:52.259464 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:14:52.259474 | orchestrator |
2026-03-10 01:14:52.259480 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-10 01:14:52.259487 | orchestrator | Tuesday 10 March 2026 01:13:06 +0000 (0:00:02.621) 0:01:04.667 *********
2026-03-10 01:14:52.259493 | orchestrator |
2026-03-10 01:14:52.259503 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-10 01:14:52.259510 | orchestrator | Tuesday 10 March 2026 01:13:06 +0000 (0:00:00.103) 0:01:04.771 *********
2026-03-10 01:14:52.259515 | orchestrator |
2026-03-10 01:14:52.259521 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-10 01:14:52.259527 | orchestrator | Tuesday 10 March 2026 01:13:06 +0000 (0:00:00.067) 0:01:04.838 *********
2026-03-10 01:14:52.259534 | orchestrator |
2026-03-10 01:14:52.259540 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-10 01:14:52.259546 | orchestrator | Tuesday 10 March 2026 01:13:06 +0000 (0:00:00.281) 0:01:05.120 *********
2026-03-10 01:14:52.259552 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:52.259558 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:52.259565 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:14:52.259570 | orchestrator |
2026-03-10 01:14:52.259578 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-10 01:14:52.259584 | orchestrator | Tuesday 10 March 2026 01:13:08 +0000 (0:00:01.852) 0:01:06.973 *********
2026-03-10 01:14:52.259590 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:52.259596 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:52.259602 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-10 01:14:52.259609 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-10 01:14:52.259615 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-10 01:14:52.259621 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-03-10 01:14:52.259628 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (8 retries left).
2026-03-10 01:14:52.259634 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:14:52.259641 | orchestrator |
2026-03-10 01:14:52.259647 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-10 01:14:52.259653 | orchestrator | Tuesday 10 March 2026 01:14:11 +0000 (0:01:03.454) 0:02:10.427 *********
2026-03-10 01:14:52.259659 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:52.259665 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:14:52.259671 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:14:52.259678 | orchestrator |
2026-03-10 01:14:52.259684 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-10 01:14:52.259690 | orchestrator | Tuesday 10 March 2026 01:14:44 +0000 (0:00:32.414) 0:02:42.842 *********
2026-03-10 01:14:52.259697 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:14:52.259703 | orchestrator |
2026-03-10 01:14:52.259709 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-10 01:14:52.259716 | orchestrator | Tuesday 10 March 2026 01:14:46 +0000 (0:00:02.404) 0:02:45.246 *********
2026-03-10 01:14:52.259731 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:52.259737 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:52.259742 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:52.259748 | orchestrator |
2026-03-10 01:14:52.259755 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-10 01:14:52.259761 | orchestrator | Tuesday 10 March 2026 01:14:47 +0000 (0:00:00.599) 0:02:45.845 *********
2026-03-10 01:14:52.259768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-10 01:14:52.259776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-10 01:14:52.259783 | orchestrator |
2026-03-10 01:14:52.259789 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-10 01:14:52.259795 | orchestrator | Tuesday 10 March 2026 01:14:49 +0000 (0:00:02.596) 0:02:48.442 *********
2026-03-10 01:14:52.259801 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:52.259808 | orchestrator |
2026-03-10 01:14:52.259814 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:14:52.259822 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:14:52.259830 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:14:52.259837 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:14:52.259843 | orchestrator |
2026-03-10 01:14:52.259849 | orchestrator |
2026-03-10 01:14:52.259856 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:14:52.259871 | orchestrator | Tuesday 10 March 2026 01:14:50 +0000 (0:00:00.301) 0:02:48.743 *********
2026-03-10 01:14:52.259877 | orchestrator | ===============================================================================
2026-03-10 01:14:52.259882 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 63.45s
2026-03-10 01:14:52.259893 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.01s
2026-03-10 01:14:52.259900 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.41s
2026-03-10 01:14:52.259907 | orchestrator | grafana : Creating grafana database ------------------------------------- 3.04s
2026-03-10 01:14:52.259914 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.62s
2026-03-10 01:14:52.259920 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.60s
2026-03-10 01:14:52.259927 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.40s
2026-03-10 01:14:52.259935 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.85s
2026-03-10 01:14:52.259941 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.58s
2026-03-10 01:14:52.259948 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.47s
2026-03-10 01:14:52.259954 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.43s
2026-03-10 01:14:52.259960 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s
2026-03-10 01:14:52.259967 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.37s
2026-03-10 01:14:52.259973 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s
2026-03-10 01:14:52.259988 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.12s
2026-03-10 01:14:52.259996 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.10s
2026-03-10 01:14:52.260003 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.94s
2026-03-10 01:14:52.260010 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.92s
2026-03-10 01:14:52.260017 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.90s
2026-03-10 01:14:52.260024 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s
2026-03-10 01:14:55.300294 | orchestrator | 2026-03-10 01:14:55 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:14:55.302218 | orchestrator | 2026-03-10 01:14:55 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED
2026-03-10 01:14:55.302282 | orchestrator | 2026-03-10 01:14:55 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:14:58.342825 | orchestrator | 2026-03-10 01:14:58 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:14:58.344961 | orchestrator | 2026-03-10 01:14:58 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED
2026-03-10 01:14:58.345681 | orchestrator | 2026-03-10 01:14:58 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:15:01.387671 | orchestrator | 2026-03-10 01:15:01 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED
2026-03-10 01:15:01.390538 | orchestrator | 2026-03-10 01:15:01 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED
2026-03-10 01:15:01.390615 | orchestrator | 2026-03-10
01:15:01 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output condensed: tasks de7f415b-d412-4b25-bdb9-8bd9c5948ac2 and 8c49c5c3-395e-4cde-b04c-77b6d27f561b remain in state STARTED, rechecked every ~3 seconds from 01:15:04 to 01:15:47 ...]
2026-03-10 01:15:50.134490 | orchestrator | 2026-03-10 01:15:50 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10
01:15:50.136823 | orchestrator | 2026-03-10 01:15:50 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:15:50.136968 | orchestrator | 2026-03-10 01:15:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:53.182147 | orchestrator | 2026-03-10 01:15:53 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:15:53.184414 | orchestrator | 2026-03-10 01:15:53 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:15:53.184475 | orchestrator | 2026-03-10 01:15:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:56.230332 | orchestrator | 2026-03-10 01:15:56 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state STARTED 2026-03-10 01:15:56.231833 | orchestrator | 2026-03-10 01:15:56 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:15:56.231912 | orchestrator | 2026-03-10 01:15:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:59.279399 | orchestrator | 2026-03-10 01:15:59 | INFO  | Task de7f415b-d412-4b25-bdb9-8bd9c5948ac2 is in state SUCCESS 2026-03-10 01:15:59.282426 | orchestrator | 2026-03-10 01:15:59 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:15:59.282516 | orchestrator | 2026-03-10 01:15:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:02.323006 | orchestrator | 2026-03-10 01:16:02 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:16:02.323150 | orchestrator | 2026-03-10 01:16:02 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:05.371238 | orchestrator | 2026-03-10 01:16:05 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:16:05.371373 | orchestrator | 2026-03-10 01:16:05 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:08.411230 | orchestrator | 2026-03-10 01:16:08 | INFO  | Task 
8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:16:08.411336 | orchestrator | 2026-03-10 01:16:08 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output condensed: task 8c49c5c3-395e-4cde-b04c-77b6d27f561b remains in state STARTED, rechecked every ~3 seconds from 01:16:11 to 01:18:55 ...]
2026-03-10 01:18:58.886743 | orchestrator | 2026-03-10 01:18:58 | INFO  | Task
8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:18:58.886837 | orchestrator | 2026-03-10 01:18:58 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:19:01.934607 | orchestrator | 2026-03-10 01:19:01 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:19:01.934703 | orchestrator | 2026-03-10 01:19:01 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:19:04.990508 | orchestrator | 2026-03-10 01:19:04 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:19:04.990685 | orchestrator | 2026-03-10 01:19:04 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:19:08.036304 | orchestrator | 2026-03-10 01:19:08 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state STARTED 2026-03-10 01:19:08.036390 | orchestrator | 2026-03-10 01:19:08 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:19:11.080989 | orchestrator | 2026-03-10 01:19:11 | INFO  | Task 8c49c5c3-395e-4cde-b04c-77b6d27f561b is in state SUCCESS 2026-03-10 01:19:11.081083 | orchestrator | 2026-03-10 01:19:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-10 01:19:11.082603 | orchestrator | 2026-03-10 01:19:11.082675 | orchestrator | 2026-03-10 01:19:11.082685 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-10 01:19:11.082693 | orchestrator | 2026-03-10 01:19:11.082701 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-10 01:19:11.082709 | orchestrator | Tuesday 10 March 2026 01:08:56 +0000 (0:00:00.240) 0:00:00.241 ********* 2026-03-10 01:19:11.082717 | orchestrator | changed: [localhost] 2026-03-10 01:19:11.082758 | orchestrator | 2026-03-10 01:19:11.082781 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-10 01:19:11.082789 | orchestrator | Tuesday 10 March 2026 01:08:58 +0000 (0:00:02.144) 
0:00:02.386 ********* 2026-03-10 01:19:11.082797 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-03-10 01:19:11.082819 | orchestrator | 2026-03-10 01:19:11.082827 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[... 'STILL ALIVE' keepalive line repeated 7 more times while the download ran ...]
2026-03-10 01:19:11.083029 | orchestrator | changed: [localhost] 2026-03-10 01:19:11.083044 | orchestrator | 2026-03-10 01:19:11.083052 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-10 01:19:11.083059 | orchestrator | Tuesday 10 March 2026 01:15:09 +0000 (0:06:10.479) 0:06:12.865 ********* 2026-03-10 01:19:11.083067 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-10 01:19:11.083074 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2026-03-10 01:19:11.083081 | orchestrator | changed: [localhost] 2026-03-10 01:19:11.083089 | orchestrator | 2026-03-10 01:19:11.083174 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:19:11.083189 | orchestrator | 2026-03-10 01:19:11.083203 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:19:11.083216 | orchestrator | Tuesday 10 March 2026 01:15:56 +0000 (0:00:47.900) 0:07:00.766 ********* 2026-03-10 01:19:11.083225 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.083233 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:19:11.083242 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:19:11.083250 | orchestrator | 2026-03-10 01:19:11.083259 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:19:11.083280 | orchestrator | Tuesday 10 March 2026 01:15:57 +0000 (0:00:00.331) 0:07:01.097 ********* 2026-03-10 01:19:11.083289 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-10 01:19:11.083297 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-10 01:19:11.083307 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-10 01:19:11.083316 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-10 01:19:11.083324 | orchestrator | 2026-03-10 01:19:11.083333 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-10 01:19:11.083345 | orchestrator | skipping: no hosts matched 2026-03-10 01:19:11.083358 | orchestrator | 2026-03-10 01:19:11.083370 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:19:11.083383 | orchestrator | localhost : 
ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:19:11.083398 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:19:11.083411 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:19:11.083424 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:19:11.083437 | orchestrator | 2026-03-10 01:19:11.083450 | orchestrator | 2026-03-10 01:19:11.083463 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:19:11.083476 | orchestrator | Tuesday 10 March 2026 01:15:57 +0000 (0:00:00.608) 0:07:01.706 ********* 2026-03-10 01:19:11.083488 | orchestrator | =============================================================================== 2026-03-10 01:19:11.083501 | orchestrator | Download ironic-agent initramfs --------------------------------------- 370.48s 2026-03-10 01:19:11.083513 | orchestrator | Download ironic-agent kernel ------------------------------------------- 47.90s 2026-03-10 01:19:11.083562 | orchestrator | Ensure the destination directory exists --------------------------------- 2.14s 2026-03-10 01:19:11.083576 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-03-10 01:19:11.083589 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-10 01:19:11.083602 | orchestrator | 2026-03-10 01:19:11.083614 | orchestrator | 2026-03-10 01:19:11.083626 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:19:11.083639 | orchestrator | 2026-03-10 01:19:11.083650 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:19:11.083662 | orchestrator | Tuesday 10 March 2026 01:14:13 +0000 
(0:00:00.287) 0:00:00.287 ********* 2026-03-10 01:19:11.083674 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.083705 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:19:11.083719 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:19:11.083732 | orchestrator | 2026-03-10 01:19:11.083745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:19:11.083760 | orchestrator | Tuesday 10 March 2026 01:14:13 +0000 (0:00:00.551) 0:00:00.838 ********* 2026-03-10 01:19:11.083774 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-10 01:19:11.083787 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-10 01:19:11.083801 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-10 01:19:11.083814 | orchestrator | 2026-03-10 01:19:11.083826 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-10 01:19:11.083840 | orchestrator | 2026-03-10 01:19:11.083847 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-10 01:19:11.083875 | orchestrator | Tuesday 10 March 2026 01:14:14 +0000 (0:00:00.755) 0:00:01.594 ********* 2026-03-10 01:19:11.083883 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:19:11.083891 | orchestrator | 2026-03-10 01:19:11.083898 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-10 01:19:11.083905 | orchestrator | Tuesday 10 March 2026 01:14:15 +0000 (0:00:00.916) 0:00:02.510 ********* 2026-03-10 01:19:11.083913 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-10 01:19:11.083920 | orchestrator | 2026-03-10 01:19:11.083927 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-10 01:19:11.083935 | 
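The "Group hosts based on configuration" plays above (one run matching `enable_ironic_False`, the next `enable_octavia_True`) follow kolla-ansible's dynamic-grouping pattern: each host joins a group named after its configuration flag, and the service play targets only the `_True` group, which is why the ironic play reports "no hosts matched". A minimal sketch of that pattern (variable and group names inferred from the log, not quoted from the playbook source):

```yaml
- name: Group hosts based on configuration
  hosts: all
  tasks:
    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_ironic_{{ enable_ironic | bool }}"  # evaluates to enable_ironic_False here

- name: Apply role ironic
  hosts: enable_ironic_True  # empty group -> "skipping: no hosts matched"
  roles:
    - ironic
```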
orchestrator | Tuesday 10 March 2026 01:14:19 +0000 (0:00:03.603) 0:00:06.114 ********* 2026-03-10 01:19:11.083942 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-10 01:19:11.083950 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-10 01:19:11.083957 | orchestrator | 2026-03-10 01:19:11.083964 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-10 01:19:11.083971 | orchestrator | Tuesday 10 March 2026 01:14:26 +0000 (0:00:06.934) 0:00:13.049 ********* 2026-03-10 01:19:11.083978 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:19:11.083986 | orchestrator | 2026-03-10 01:19:11.083993 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-10 01:19:11.084000 | orchestrator | Tuesday 10 March 2026 01:14:29 +0000 (0:00:03.636) 0:00:16.686 ********* 2026-03-10 01:19:11.084007 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:19:11.084014 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-10 01:19:11.084022 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-10 01:19:11.084029 | orchestrator | 2026-03-10 01:19:11.084036 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-10 01:19:11.084043 | orchestrator | Tuesday 10 March 2026 01:14:38 +0000 (0:00:08.957) 0:00:25.644 ********* 2026-03-10 01:19:11.084058 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:19:11.084066 | orchestrator | 2026-03-10 01:19:11.084073 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-10 01:19:11.084089 | orchestrator | Tuesday 10 March 2026 01:14:42 +0000 (0:00:03.576) 0:00:29.220 ********* 2026-03-10 01:19:11.084096 
| orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-10 01:19:11.084104 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-10 01:19:11.084111 | orchestrator | 2026-03-10 01:19:11.084118 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-10 01:19:11.084125 | orchestrator | Tuesday 10 March 2026 01:14:50 +0000 (0:00:08.068) 0:00:37.289 ********* 2026-03-10 01:19:11.084132 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-10 01:19:11.084139 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-10 01:19:11.084146 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-10 01:19:11.084154 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-10 01:19:11.084161 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-10 01:19:11.084168 | orchestrator | 2026-03-10 01:19:11.084175 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-10 01:19:11.084182 | orchestrator | Tuesday 10 March 2026 01:15:07 +0000 (0:00:17.013) 0:00:54.302 ********* 2026-03-10 01:19:11.084189 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:19:11.084196 | orchestrator | 2026-03-10 01:19:11.084204 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-10 01:19:11.084211 | orchestrator | Tuesday 10 March 2026 01:15:07 +0000 (0:00:00.570) 0:00:54.873 ********* 2026-03-10 01:19:11.084218 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084226 | orchestrator | 2026-03-10 01:19:11.084233 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-10 01:19:11.084240 | orchestrator | Tuesday 
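The `service-ks-register` tasks above register Octavia in Keystone: a `load-balancer` service, internal and public endpoints on port 9876, a `service` project, an `octavia` user, and an `admin` role grant. Restated as the kind of data structure such a role consumes (variable names and layout are assumptions; only the values come from the log):

```yaml
# Hedged reconstruction of the registration data shown in the log.
octavia_ks_services:
  - name: octavia
    type: load-balancer
    endpoints:
      - {interface: internal, url: "https://api-int.testbed.osism.xyz:9876"}
      - {interface: public,   url: "https://api.testbed.osism.xyz:9876"}

octavia_ks_users:
  - project: service
    user: octavia
    role: admin
```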
10 March 2026 01:15:13 +0000 (0:00:05.577) 0:01:00.450 ********* 2026-03-10 01:19:11.084247 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084254 | orchestrator | 2026-03-10 01:19:11.084261 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-10 01:19:11.084269 | orchestrator | Tuesday 10 March 2026 01:15:18 +0000 (0:00:04.711) 0:01:05.162 ********* 2026-03-10 01:19:11.084276 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.084283 | orchestrator | 2026-03-10 01:19:11.084290 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-10 01:19:11.084297 | orchestrator | Tuesday 10 March 2026 01:15:21 +0000 (0:00:03.574) 0:01:08.737 ********* 2026-03-10 01:19:11.084305 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-10 01:19:11.084318 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-10 01:19:11.084325 | orchestrator | 2026-03-10 01:19:11.084332 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-10 01:19:11.084340 | orchestrator | Tuesday 10 March 2026 01:15:32 +0000 (0:00:10.405) 0:01:19.142 ********* 2026-03-10 01:19:11.084347 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-10 01:19:11.084355 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-10 01:19:11.084363 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-10 01:19:11.084370 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-10 01:19:11.084378 | 
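The "Add rules for security groups" loop items above fully describe the amphora management rule set. Restated as a YAML fragment (structure inferred; protocols and ports are exactly those shown in the loop items):

```yaml
octavia_security_groups:
  lb-mgmt-sec-grp:          # traffic to the amphorae
    rules:
      - {protocol: icmp}
      - {protocol: tcp, src_port: 22,   dst_port: 22}    # SSH into amphora
      - {protocol: tcp, src_port: 9443, dst_port: 9443}  # amphora agent API
  lb-health-mgr-sec-grp:    # traffic to the health-manager nodes
    rules:
      - {protocol: udp, src_port: 5555, dst_port: 5555}  # amphora heartbeats
```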
orchestrator | 2026-03-10 01:19:11.084385 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-10 01:19:11.084393 | orchestrator | Tuesday 10 March 2026 01:15:51 +0000 (0:00:18.958) 0:01:38.101 ********* 2026-03-10 01:19:11.084406 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084413 | orchestrator | 2026-03-10 01:19:11.084420 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-10 01:19:11.084428 | orchestrator | Tuesday 10 March 2026 01:15:57 +0000 (0:00:06.409) 0:01:44.510 ********* 2026-03-10 01:19:11.084435 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084442 | orchestrator | 2026-03-10 01:19:11.084458 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-10 01:19:11.084466 | orchestrator | Tuesday 10 March 2026 01:16:03 +0000 (0:00:05.939) 0:01:50.450 ********* 2026-03-10 01:19:11.084473 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:19:11.084481 | orchestrator | 2026-03-10 01:19:11.084488 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-10 01:19:11.084495 | orchestrator | Tuesday 10 March 2026 01:16:03 +0000 (0:00:00.295) 0:01:50.746 ********* 2026-03-10 01:19:11.084503 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.084510 | orchestrator | 2026-03-10 01:19:11.084517 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-10 01:19:11.084524 | orchestrator | Tuesday 10 March 2026 01:16:07 +0000 (0:00:03.802) 0:01:54.549 ********* 2026-03-10 01:19:11.084532 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:19:11.084539 | orchestrator | 2026-03-10 01:19:11.084546 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 
2026-03-10 01:19:11.084554 | orchestrator | Tuesday 10 March 2026 01:16:08 +0000 (0:00:01.247) 0:01:55.797 ********* 2026-03-10 01:19:11.084561 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084568 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084579 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084587 | orchestrator | 2026-03-10 01:19:11.084594 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-10 01:19:11.084601 | orchestrator | Tuesday 10 March 2026 01:16:14 +0000 (0:00:05.735) 0:02:01.532 ********* 2026-03-10 01:19:11.084609 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084616 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084623 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084631 | orchestrator | 2026-03-10 01:19:11.084638 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-10 01:19:11.084646 | orchestrator | Tuesday 10 March 2026 01:16:19 +0000 (0:00:04.858) 0:02:06.391 ********* 2026-03-10 01:19:11.084653 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084660 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084668 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084675 | orchestrator | 2026-03-10 01:19:11.084682 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-10 01:19:11.084690 | orchestrator | Tuesday 10 March 2026 01:16:20 +0000 (0:00:00.824) 0:02:07.216 ********* 2026-03-10 01:19:11.084697 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.084704 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:19:11.084712 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:19:11.084719 | orchestrator | 2026-03-10 01:19:11.084726 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-10 01:19:11.084733 | 
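The sequence above wires each controller into the load-balancer management network: a Neutron port is created per health-manager node, its `host_id` binding is updated, the port is plugged into Open vSwitch `br-int` (surfacing as interface `ohm0`), and `isc-dhcp-client` plus a dhclient config and an `octavia-interface` systemd unit keep it addressed persistently. A condensed sketch of the first step (module arguments are illustrative, not verbatim from the role):

```yaml
# Hypothetical port-creation task mirroring
# "Create ports for Octavia health-manager nodes" above.
- name: Create port for the health-manager node
  openstack.cloud.port:
    name: "octavia-hm-port-{{ inventory_hostname }}"
    network: lb-mgmt-net
    security_groups:
      - lb-health-mgr-sec-grp
  register: hm_port
# The port is then bound to the node (host_id), attached to br-int as
# ohm0, and addressed via dhclient under octavia-interface.service.
```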
orchestrator | Tuesday 10 March 2026 01:16:22 +0000 (0:00:02.085) 0:02:09.302 ********* 2026-03-10 01:19:11.084741 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084748 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084755 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084762 | orchestrator | 2026-03-10 01:19:11.084770 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-10 01:19:11.084777 | orchestrator | Tuesday 10 March 2026 01:16:23 +0000 (0:00:01.508) 0:02:10.810 ********* 2026-03-10 01:19:11.084784 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084791 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084799 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084811 | orchestrator | 2026-03-10 01:19:11.084818 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-10 01:19:11.084826 | orchestrator | Tuesday 10 March 2026 01:16:25 +0000 (0:00:01.228) 0:02:12.039 ********* 2026-03-10 01:19:11.084833 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084840 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084847 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084876 | orchestrator | 2026-03-10 01:19:11.084884 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-10 01:19:11.084891 | orchestrator | Tuesday 10 March 2026 01:16:27 +0000 (0:00:02.243) 0:02:14.282 ********* 2026-03-10 01:19:11.084898 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.084906 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.084913 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.084921 | orchestrator | 2026-03-10 01:19:11.084933 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-10 01:19:11.084940 | orchestrator | 
Tuesday 10 March 2026 01:16:29 +0000 (0:00:01.806) 0:02:16.089 ********* 2026-03-10 01:19:11.084948 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.084955 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:19:11.084962 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:19:11.084969 | orchestrator | 2026-03-10 01:19:11.084977 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-10 01:19:11.084984 | orchestrator | Tuesday 10 March 2026 01:16:29 +0000 (0:00:00.656) 0:02:16.745 ********* 2026-03-10 01:19:11.084991 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:19:11.084998 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:19:11.085006 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.085013 | orchestrator | 2026-03-10 01:19:11.085020 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-10 01:19:11.085027 | orchestrator | Tuesday 10 March 2026 01:16:32 +0000 (0:00:03.037) 0:02:19.783 ********* 2026-03-10 01:19:11.085034 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:19:11.085042 | orchestrator | 2026-03-10 01:19:11.085049 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-10 01:19:11.085056 | orchestrator | Tuesday 10 March 2026 01:16:33 +0000 (0:00:00.781) 0:02:20.565 ********* 2026-03-10 01:19:11.085063 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.085071 | orchestrator | 2026-03-10 01:19:11.085078 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-10 01:19:11.085085 | orchestrator | Tuesday 10 March 2026 01:16:37 +0000 (0:00:03.948) 0:02:24.514 ********* 2026-03-10 01:19:11.085092 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.085099 | orchestrator | 2026-03-10 01:19:11.085106 | orchestrator | TASK [octavia : 
Get security groups for octavia] ******************************* 2026-03-10 01:19:11.085114 | orchestrator | Tuesday 10 March 2026 01:16:40 +0000 (0:00:03.323) 0:02:27.837 ********* 2026-03-10 01:19:11.085121 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-10 01:19:11.085129 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-10 01:19:11.085136 | orchestrator | 2026-03-10 01:19:11.085143 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-10 01:19:11.085151 | orchestrator | Tuesday 10 March 2026 01:16:48 +0000 (0:00:07.077) 0:02:34.914 ********* 2026-03-10 01:19:11.085158 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.085165 | orchestrator | 2026-03-10 01:19:11.085172 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-10 01:19:11.085179 | orchestrator | Tuesday 10 March 2026 01:16:51 +0000 (0:00:03.542) 0:02:38.457 ********* 2026-03-10 01:19:11.085186 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:19:11.085194 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:19:11.085201 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:19:11.085208 | orchestrator | 2026-03-10 01:19:11.085215 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-10 01:19:11.085228 | orchestrator | Tuesday 10 March 2026 01:16:51 +0000 (0:00:00.363) 0:02:38.820 ********* 2026-03-10 01:19:11.085243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.085256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.085289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.085300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.085310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.085338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.085348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.085450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}}) 2026-03-10 01:19:11.085458 | orchestrator | 2026-03-10 01:19:11.085466 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-10 01:19:11.085473 | orchestrator | Tuesday 10 March 2026 01:16:54 +0000 (0:00:02.584) 0:02:41.405 ********* 2026-03-10 01:19:11.085481 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:19:11.085488 | orchestrator | 2026-03-10 01:19:11.085499 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-10 01:19:11.085507 | orchestrator | Tuesday 10 March 2026 01:16:54 +0000 (0:00:00.159) 0:02:41.564 ********* 2026-03-10 01:19:11.085514 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:19:11.085522 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:19:11.085529 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:19:11.085537 | orchestrator | 2026-03-10 01:19:11.085544 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-10 01:19:11.085551 | orchestrator | Tuesday 10 March 2026 01:16:55 +0000 (0:00:00.616) 0:02:42.181 ********* 2026-03-10 01:19:11.085559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.085580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.085603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.085611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.085619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.085627 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:19:11.085642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.085650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.085664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.085675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.085683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.085691 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:19:11.085699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.086283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.086311 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.086362 | orchestrator | skipping: [testbed-node-2] 
2026-03-10 01:19:11.086370 | orchestrator | 2026-03-10 01:19:11.086378 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-10 01:19:11.086386 | orchestrator | Tuesday 10 March 2026 01:16:56 +0000 (0:00:00.736) 0:02:42.918 ********* 2026-03-10 01:19:11.086393 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:19:11.086400 | orchestrator | 2026-03-10 01:19:11.086408 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-10 01:19:11.086415 | orchestrator | Tuesday 10 March 2026 01:16:56 +0000 (0:00:00.598) 0:02:43.516 ********* 2026-03-10 01:19:11.086423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.086440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.086498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.086507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.086520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.086528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.086536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2026-03-10 01:19:11.086551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}}) 2026-03-10 01:19:11.086619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.086647 | orchestrator | 2026-03-10 01:19:11.086654 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-10 01:19:11.086662 | orchestrator | Tuesday 10 March 2026 01:17:02 +0000 (0:00:05.744) 0:02:49.261 ********* 2026-03-10 01:19:11.086681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.086704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.086713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.086744 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:19:11.086759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.086767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.086775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.086802 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:19:11.086815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.086840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.086849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.086901 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:19:11.086909 | orchestrator | 2026-03-10 01:19:11.086917 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-10 01:19:11.086926 | orchestrator | Tuesday 10 March 2026 01:17:03 +0000 (0:00:00.798) 0:02:50.059 ********* 2026-03-10 01:19:11.086935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.086954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.086965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.086991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.087003 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:19:11.087022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.087036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.087061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.087075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.087088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.087101 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:19:11.087124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:19:11.087133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:19:11.087141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.087156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:19:11.087168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:19:11.087176 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:19:11.087184 | orchestrator | 2026-03-10 01:19:11.087191 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-10 01:19:11.087198 | orchestrator | Tuesday 10 March 2026 01:17:04 +0000 (0:00:00.942) 0:02:51.002 ********* 2026-03-10 01:19:11.087206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.087218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.087226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.087244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.087253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.087260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.087268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087359 | orchestrator | 2026-03-10 01:19:11.087367 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-10 01:19:11.087374 | orchestrator | Tuesday 10 March 2026 01:17:09 +0000 (0:00:05.300) 0:02:56.302 ********* 2026-03-10 01:19:11.087381 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-10 01:19:11.087389 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-10 01:19:11.087396 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-10 01:19:11.087404 | orchestrator | 2026-03-10 01:19:11.087411 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-10 01:19:11.087418 | orchestrator | Tuesday 10 March 2026 01:17:11 +0000 (0:00:02.012) 0:02:58.315 ********* 2026-03-10 01:19:11.087430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.087438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.087446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.087466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.087480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.087488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.087500 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087524 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087558 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.087590 | orchestrator | 2026-03-10 01:19:11.087599 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 
2026-03-10 01:19:11.087608 | orchestrator | Tuesday 10 March 2026 01:17:29 +0000 (0:00:17.804) 0:03:16.119 ********* 2026-03-10 01:19:11.087617 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:19:11.087626 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:19:11.087635 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:19:11.087643 | orchestrator | 2026-03-10 01:19:11.087652 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-10 01:19:11.087661 | orchestrator | Tuesday 10 March 2026 01:17:30 +0000 (0:00:01.684) 0:03:17.803 ********* 2026-03-10 01:19:11.087670 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.087685 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.087694 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.087703 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.087711 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.087720 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.087729 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.087738 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.087746 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.087756 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-10 01:19:11.087765 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-10 01:19:11.087778 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-10 01:19:11.087787 | orchestrator | 2026-03-10 01:19:11.087796 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-10 
01:19:11.087805 | orchestrator | Tuesday 10 March 2026 01:17:36 +0000 (0:00:05.703) 0:03:23.507 ********* 2026-03-10 01:19:11.087813 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.087822 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.087830 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.087839 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.087848 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.087905 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.087915 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.087923 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.087932 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.087941 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-10 01:19:11.087949 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-10 01:19:11.087958 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-10 01:19:11.087967 | orchestrator | 2026-03-10 01:19:11.087975 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-10 01:19:11.087984 | orchestrator | Tuesday 10 March 2026 01:17:42 +0000 (0:00:05.792) 0:03:29.300 ********* 2026-03-10 01:19:11.087993 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.088001 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.088010 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-10 01:19:11.088019 | orchestrator | changed: [testbed-node-0] => 
(item=client_ca.cert.pem) 2026-03-10 01:19:11.088027 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.088037 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-10 01:19:11.088046 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.088054 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.088063 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-10 01:19:11.088072 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-10 01:19:11.088080 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-10 01:19:11.088089 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-10 01:19:11.088098 | orchestrator | 2026-03-10 01:19:11.088112 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-10 01:19:11.088122 | orchestrator | Tuesday 10 March 2026 01:17:47 +0000 (0:00:05.586) 0:03:34.887 ********* 2026-03-10 01:19:11.088140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.088150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.088176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:19:11.088185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.088194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.088214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:19:11.088223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:19:11.088457 | orchestrator | 2026-03-10 01:19:11.088466 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-10 01:19:11.088474 | orchestrator | Tuesday 10 March 2026 01:17:51 +0000 (0:00:03.860) 0:03:38.747 ********* 2026-03-10 01:19:11.088487 | 
orchestrator | skipping: [testbed-node-0]
2026-03-10 01:19:11.088495 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:19:11.088503 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:19:11.088511 | orchestrator |
2026-03-10 01:19:11.088519 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-10 01:19:11.088527 | orchestrator | Tuesday 10 March 2026 01:17:52 +0000 (0:00:00.395) 0:03:39.142 *********
2026-03-10 01:19:11.088535 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088542 | orchestrator |
2026-03-10 01:19:11.088550 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-10 01:19:11.088558 | orchestrator | Tuesday 10 March 2026 01:17:54 +0000 (0:00:02.228) 0:03:41.371 *********
2026-03-10 01:19:11.088566 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088574 | orchestrator |
2026-03-10 01:19:11.088582 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-10 01:19:11.088590 | orchestrator | Tuesday 10 March 2026 01:17:56 +0000 (0:00:02.269) 0:03:43.640 *********
2026-03-10 01:19:11.088598 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088606 | orchestrator |
2026-03-10 01:19:11.088614 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-10 01:19:11.088622 | orchestrator | Tuesday 10 March 2026 01:17:59 +0000 (0:00:02.457) 0:03:46.098 *********
2026-03-10 01:19:11.088630 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088637 | orchestrator |
2026-03-10 01:19:11.088645 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-10 01:19:11.088658 | orchestrator | Tuesday 10 March 2026 01:18:02 +0000 (0:00:03.075) 0:03:49.173 *********
2026-03-10 01:19:11.088666 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088674 | orchestrator |
2026-03-10 01:19:11.088681 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-10 01:19:11.088689 | orchestrator | Tuesday 10 March 2026 01:18:26 +0000 (0:00:24.125) 0:04:13.299 *********
2026-03-10 01:19:11.088697 | orchestrator |
2026-03-10 01:19:11.088705 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-10 01:19:11.088713 | orchestrator | Tuesday 10 March 2026 01:18:26 +0000 (0:00:00.087) 0:04:13.387 *********
2026-03-10 01:19:11.088721 | orchestrator |
2026-03-10 01:19:11.088729 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-10 01:19:11.088737 | orchestrator | Tuesday 10 March 2026 01:18:26 +0000 (0:00:00.066) 0:04:13.453 *********
2026-03-10 01:19:11.088744 | orchestrator |
2026-03-10 01:19:11.088752 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-10 01:19:11.088760 | orchestrator | Tuesday 10 March 2026 01:18:26 +0000 (0:00:00.077) 0:04:13.531 *********
2026-03-10 01:19:11.088768 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088776 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:19:11.088784 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:19:11.088792 | orchestrator |
2026-03-10 01:19:11.088800 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-10 01:19:11.088811 | orchestrator | Tuesday 10 March 2026 01:18:39 +0000 (0:00:12.526) 0:04:26.057 *********
2026-03-10 01:19:11.088820 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088828 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:19:11.088836 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:19:11.088844 | orchestrator |
2026-03-10 01:19:11.088870 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-10 01:19:11.088879 | orchestrator | Tuesday 10 March 2026 01:18:46 +0000 (0:00:07.128) 0:04:33.186 *********
2026-03-10 01:19:11.088887 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088895 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:19:11.088903 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:19:11.088910 | orchestrator |
2026-03-10 01:19:11.088918 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-10 01:19:11.088926 | orchestrator | Tuesday 10 March 2026 01:18:52 +0000 (0:00:06.650) 0:04:39.836 *********
2026-03-10 01:19:11.088935 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088943 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:19:11.088950 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:19:11.088958 | orchestrator |
2026-03-10 01:19:11.088966 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-10 01:19:11.088974 | orchestrator | Tuesday 10 March 2026 01:19:03 +0000 (0:00:10.813) 0:04:50.649 *********
2026-03-10 01:19:11.088983 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:19:11.088991 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:19:11.088999 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:19:11.089007 | orchestrator |
2026-03-10 01:19:11.089015 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:19:11.089024 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:19:11.089034 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-10 01:19:11.089043 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-10 01:19:11.089052 | orchestrator |
2026-03-10 01:19:11.089061 | orchestrator |
2026-03-10 01:19:11.089070 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:19:11.089085 | orchestrator | Tuesday 10 March 2026 01:19:10 +0000 (0:00:06.275) 0:04:56.925 *********
2026-03-10 01:19:11.089094 | orchestrator | ===============================================================================
2026-03-10 01:19:11.089104 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.13s
2026-03-10 01:19:11.089113 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.96s
2026-03-10 01:19:11.089123 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.80s
2026-03-10 01:19:11.089148 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.01s
2026-03-10 01:19:11.089158 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.53s
2026-03-10 01:19:11.089167 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.81s
2026-03-10 01:19:11.089176 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.41s
2026-03-10 01:19:11.089185 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.96s
2026-03-10 01:19:11.089194 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.07s
2026-03-10 01:19:11.089203 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.13s
2026-03-10 01:19:11.089212 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.08s
2026-03-10 01:19:11.089221 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.93s
2026-03-10 01:19:11.089230 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.65s
2026-03-10 01:19:11.089239 |
orchestrator | octavia : Create loadbalancer management network ------------------------ 6.41s
2026-03-10 01:19:11.089248 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.28s
2026-03-10 01:19:11.089257 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.94s
2026-03-10 01:19:11.089266 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.79s
2026-03-10 01:19:11.089276 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.74s
2026-03-10 01:19:11.089285 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.74s
2026-03-10 01:19:11.089294 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.70s
2026-03-10 01:19:14.124051 | orchestrator | 2026-03-10 01:19:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:17.165820 | orchestrator | 2026-03-10 01:19:17 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:20.200180 | orchestrator | 2026-03-10 01:19:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:23.240393 | orchestrator | 2026-03-10 01:19:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:26.283451 | orchestrator | 2026-03-10 01:19:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:29.334000 | orchestrator | 2026-03-10 01:19:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:32.376627 | orchestrator | 2026-03-10 01:19:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:35.421407 | orchestrator | 2026-03-10 01:19:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:38.461373 | orchestrator | 2026-03-10 01:19:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:41.506510 | orchestrator | 2026-03-10 01:19:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:44.543621 | orchestrator | 2026-03-10 01:19:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:47.581179 | orchestrator | 2026-03-10 01:19:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:50.624256 | orchestrator | 2026-03-10 01:19:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:53.669982 | orchestrator | 2026-03-10 01:19:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:56.715497 | orchestrator | 2026-03-10 01:19:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:19:59.757173 | orchestrator | 2026-03-10 01:19:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:20:02.798334 | orchestrator | 2026-03-10 01:20:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:20:05.841516 | orchestrator | 2026-03-10 01:20:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:20:08.884183 | orchestrator | 2026-03-10 01:20:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:20:11.917422 | orchestrator |
2026-03-10 01:20:12.341221 | orchestrator |
2026-03-10 01:20:12.347848 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Mar 10 01:20:12 UTC 2026
2026-03-10 01:20:12.347969 | orchestrator |
2026-03-10 01:20:12.690391 | orchestrator | ok: Runtime: 0:39:17.108171
2026-03-10 01:20:12.950966 |
2026-03-10 01:20:12.951110 | TASK [Bootstrap services]
2026-03-10 01:20:13.698408 | orchestrator |
2026-03-10 01:20:13.698721 | orchestrator | # BOOTSTRAP
2026-03-10 01:20:13.698749 | orchestrator |
2026-03-10 01:20:13.698764 | orchestrator | + set -e
2026-03-10 01:20:13.698777 | orchestrator | + echo
2026-03-10 01:20:13.698791 | orchestrator | + echo '# BOOTSTRAP'
2026-03-10 01:20:13.698839 | orchestrator | + echo
2026-03-10 01:20:13.698892 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-10 01:20:13.706007 | orchestrator | + set -e
2026-03-10 01:20:13.706137 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-10 01:20:19.714516 | orchestrator | 2026-03-10 01:20:19 | INFO  | It takes a moment until task c4705ffc-aed9-4503-a39d-02b9713090d0 (flavor-manager) has been started and output is visible here.
2026-03-10 01:20:28.295757 | orchestrator | 2026-03-10 01:20:22 | INFO  | Flavor SCS-1L-1 created
2026-03-10 01:20:28.295944 | orchestrator | 2026-03-10 01:20:23 | INFO  | Flavor SCS-1L-1-5 created
2026-03-10 01:20:28.295961 | orchestrator | 2026-03-10 01:20:23 | INFO  | Flavor SCS-1V-2 created
2026-03-10 01:20:28.295970 | orchestrator | 2026-03-10 01:20:24 | INFO  | Flavor SCS-1V-2-5 created
2026-03-10 01:20:28.295979 | orchestrator | 2026-03-10 01:20:24 | INFO  | Flavor SCS-1V-4 created
2026-03-10 01:20:28.295987 | orchestrator | 2026-03-10 01:20:24 | INFO  | Flavor SCS-1V-4-10 created
2026-03-10 01:20:28.295995 | orchestrator | 2026-03-10 01:20:24 | INFO  | Flavor SCS-1V-8 created
2026-03-10 01:20:28.296005 | orchestrator | 2026-03-10 01:20:24 | INFO  | Flavor SCS-1V-8-20 created
2026-03-10 01:20:28.296025 | orchestrator | 2026-03-10 01:20:24 | INFO  | Flavor SCS-2V-4 created
2026-03-10 01:20:28.296033 | orchestrator | 2026-03-10 01:20:25 | INFO  | Flavor SCS-2V-4-10 created
2026-03-10 01:20:28.296041 | orchestrator | 2026-03-10 01:20:25 | INFO  | Flavor SCS-2V-8 created
2026-03-10 01:20:28.296049 | orchestrator | 2026-03-10 01:20:25 | INFO  | Flavor SCS-2V-8-20 created
2026-03-10 01:20:28.296057 | orchestrator | 2026-03-10 01:20:25 | INFO  | Flavor SCS-2V-16 created
2026-03-10 01:20:28.296065 | orchestrator | 2026-03-10 01:20:25 | INFO  | Flavor SCS-2V-16-50 created
2026-03-10 01:20:28.296073 | orchestrator | 2026-03-10 01:20:25 | INFO  | Flavor SCS-4V-8 created
2026-03-10 01:20:28.296080 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-4V-8-20 created
2026-03-10 01:20:28.296088 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-4V-16 created
2026-03-10 01:20:28.296096 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-4V-16-50 created
2026-03-10 01:20:28.296104 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-4V-32 created
2026-03-10 01:20:28.296112 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-4V-32-100 created
2026-03-10 01:20:28.296120 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-8V-16 created
2026-03-10 01:20:28.296128 | orchestrator | 2026-03-10 01:20:26 | INFO  | Flavor SCS-8V-16-50 created
2026-03-10 01:20:28.296136 | orchestrator | 2026-03-10 01:20:27 | INFO  | Flavor SCS-8V-32 created
2026-03-10 01:20:28.296144 | orchestrator | 2026-03-10 01:20:27 | INFO  | Flavor SCS-8V-32-100 created
2026-03-10 01:20:28.296151 | orchestrator | 2026-03-10 01:20:27 | INFO  | Flavor SCS-16V-32 created
2026-03-10 01:20:28.296159 | orchestrator | 2026-03-10 01:20:27 | INFO  | Flavor SCS-16V-32-100 created
2026-03-10 01:20:28.296167 | orchestrator | 2026-03-10 01:20:27 | INFO  | Flavor SCS-2V-4-20s created
2026-03-10 01:20:28.296175 | orchestrator | 2026-03-10 01:20:27 | INFO  | Flavor SCS-4V-8-50s created
2026-03-10 01:20:28.296183 | orchestrator | 2026-03-10 01:20:28 | INFO  | Flavor SCS-8V-32-100s created
2026-03-10 01:20:30.776359 | orchestrator | 2026-03-10 01:20:30 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-10 01:20:30.867945 | orchestrator | 2026-03-10 01:20:30 | INFO  | Task 7f0c4c32-f3d1-45d1-b0dd-6fd9166b2b2e (bootstrap-basic) was prepared for execution.
2026-03-10 01:20:30.868963 | orchestrator | 2026-03-10 01:20:30 | INFO  | It takes a moment until task 7f0c4c32-f3d1-45d1-b0dd-6fd9166b2b2e (bootstrap-basic) has been started and output is visible here.
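[Editor's note] The flavor names created above (SCS-1L-1, SCS-2V-8-20, SCS-8V-32-100s, ...) follow the SCS flavor naming convention. Below is a minimal sketch of a parser for the subset of names appearing in this log; the field semantics (vCPU count with a V/L class letter, RAM in GiB, optional disk size in GB with an optional `s` suffix for local SSD) are assumptions based on that convention, not something this job output states.

```python
import re

# Assumed layout of the names seen in this log:
#   SCS-<cpus><cpu_class>-<ram_gib>[-<disk_gb>[s]]
# e.g. SCS-2V-8-20 or SCS-8V-32-100s. cpu_class 'V'/'L' meaning is assumed.
_FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[LV])"
    r"-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its fields (disk_gb is None if absent)."""
    m = _FLAVOR_RE.match(name)
    if m is None:
        raise ValueError(f"not a recognized SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cpu_class"),
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else None,
        "local_ssd": m.group("ssd") == "s",
    }
```

For example, `parse_scs_flavor("SCS-2V-8-20")` yields 2 vCPUs, 8 GiB RAM and a 20 GB disk, while `SCS-1L-1` has no disk field at all.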
2026-03-10 01:21:23.416994 | orchestrator |
2026-03-10 01:21:23.417100 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-10 01:21:23.417114 | orchestrator |
2026-03-10 01:21:23.417124 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-10 01:21:23.417134 | orchestrator | Tuesday 10 March 2026 01:20:35 +0000 (0:00:00.097) 0:00:00.097 *********
2026-03-10 01:21:23.417143 | orchestrator | ok: [localhost]
2026-03-10 01:21:23.417152 | orchestrator |
2026-03-10 01:21:23.417161 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-10 01:21:23.417170 | orchestrator | Tuesday 10 March 2026 01:20:37 +0000 (0:00:01.963) 0:00:02.061 *********
2026-03-10 01:21:23.417179 | orchestrator | ok: [localhost]
2026-03-10 01:21:23.417188 | orchestrator |
2026-03-10 01:21:23.417197 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-10 01:21:23.417206 | orchestrator | Tuesday 10 March 2026 01:20:49 +0000 (0:00:12.055) 0:00:14.116 *********
2026-03-10 01:21:23.417215 | orchestrator | changed: [localhost]
2026-03-10 01:21:23.417224 | orchestrator |
2026-03-10 01:21:23.417232 | orchestrator | TASK [Create public network] ***************************************************
2026-03-10 01:21:23.417242 | orchestrator | Tuesday 10 March 2026 01:20:57 +0000 (0:00:07.900) 0:00:22.016 *********
2026-03-10 01:21:23.417251 | orchestrator | changed: [localhost]
2026-03-10 01:21:23.417260 | orchestrator |
2026-03-10 01:21:23.417269 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-10 01:21:23.417277 | orchestrator | Tuesday 10 March 2026 01:21:02 +0000 (0:00:05.387) 0:00:27.404 *********
2026-03-10 01:21:23.417290 | orchestrator | changed: [localhost]
2026-03-10 01:21:23.417300 | orchestrator |
2026-03-10 01:21:23.417309 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-10 01:21:23.417317 | orchestrator | Tuesday 10 March 2026 01:21:10 +0000 (0:00:07.273) 0:00:34.677 *********
2026-03-10 01:21:23.417326 | orchestrator | changed: [localhost]
2026-03-10 01:21:23.417335 | orchestrator |
2026-03-10 01:21:23.417343 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-10 01:21:23.417352 | orchestrator | Tuesday 10 March 2026 01:21:15 +0000 (0:00:04.841) 0:00:39.519 *********
2026-03-10 01:21:23.417361 | orchestrator | changed: [localhost]
2026-03-10 01:21:23.417370 | orchestrator |
2026-03-10 01:21:23.417378 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-10 01:21:23.417396 | orchestrator | Tuesday 10 March 2026 01:21:19 +0000 (0:00:04.164) 0:00:43.684 *********
2026-03-10 01:21:23.417405 | orchestrator | ok: [localhost]
2026-03-10 01:21:23.417414 | orchestrator |
2026-03-10 01:21:23.417423 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:21:23.417432 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:21:23.417441 | orchestrator |
2026-03-10 01:21:23.417450 | orchestrator |
2026-03-10 01:21:23.417459 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:21:23.417468 | orchestrator | Tuesday 10 March 2026 01:21:23 +0000 (0:00:03.943) 0:00:47.628 *********
2026-03-10 01:21:23.417476 | orchestrator | ===============================================================================
2026-03-10 01:21:23.417485 | orchestrator | Get volume type LUKS --------------------------------------------------- 12.06s
2026-03-10 01:21:23.417494 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.90s
2026-03-10 01:21:23.417502 | orchestrator | Set public network to default ------------------------------------------- 7.27s
2026-03-10 01:21:23.417511 | orchestrator | Create public network --------------------------------------------------- 5.39s
2026-03-10 01:21:23.417540 | orchestrator | Create public subnet ---------------------------------------------------- 4.84s
2026-03-10 01:21:23.417551 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.16s
2026-03-10 01:21:23.417562 | orchestrator | Create manager role ----------------------------------------------------- 3.94s
2026-03-10 01:21:23.417571 | orchestrator | Gathering Facts --------------------------------------------------------- 1.96s
2026-03-10 01:21:25.976106 | orchestrator | 2026-03-10 01:21:25 | INFO  | It takes a moment until task e06c5c2f-70d3-456f-9f9d-99561d81f2f3 (image-manager) has been started and output is visible here.
2026-03-10 01:22:11.284661 | orchestrator | 2026-03-10 01:21:28 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-10 01:22:11.284794 | orchestrator | 2026-03-10 01:21:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-10 01:22:11.284805 | orchestrator | 2026-03-10 01:21:29 | INFO  | Importing image Cirros 0.6.2
2026-03-10 01:22:11.284811 | orchestrator | 2026-03-10 01:21:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-10 01:22:11.284816 | orchestrator | 2026-03-10 01:21:31 | INFO  | Waiting for image to leave queued state...
2026-03-10 01:22:11.284821 | orchestrator | 2026-03-10 01:21:33 | INFO  | Waiting for import to complete...
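[Editor's note] The image-manager output around here shows a simple poll loop ("Waiting for image to leave queued state...", "Waiting for import to complete..."). A generic sketch of that wait pattern follows; the function name, interval, and timeout are illustrative, not the tool's actual implementation.

```python
import time

def wait_for(check, timeout=300.0, interval=2.0, label="condition"):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Mirrors the log's pattern of repeatedly reporting progress while a
    long-running import finishes. Raises TimeoutError on expiry.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return
        print(f"Waiting for {label}...")
        time.sleep(interval)
    raise TimeoutError(f"timed out waiting for {label}")
```

In practice `check` would query the image status (e.g. "is the image active yet?"); here it can be any zero-argument callable returning a boolean.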
2026-03-10 01:22:11.284825 | orchestrator | 2026-03-10 01:21:43 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-10 01:22:11.284830 | orchestrator | 2026-03-10 01:21:44 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-10 01:22:11.284834 | orchestrator | 2026-03-10 01:21:44 | INFO  | Setting internal_version = 0.6.2
2026-03-10 01:22:11.284838 | orchestrator | 2026-03-10 01:21:44 | INFO  | Setting image_original_user = cirros
2026-03-10 01:22:11.284843 | orchestrator | 2026-03-10 01:21:44 | INFO  | Adding tag os:cirros
2026-03-10 01:22:11.284846 | orchestrator | 2026-03-10 01:21:44 | INFO  | Setting property architecture: x86_64
2026-03-10 01:22:11.284850 | orchestrator | 2026-03-10 01:21:44 | INFO  | Setting property hw_disk_bus: scsi
2026-03-10 01:22:11.284854 | orchestrator | 2026-03-10 01:21:44 | INFO  | Setting property hw_rng_model: virtio
2026-03-10 01:22:11.284858 | orchestrator | 2026-03-10 01:21:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-10 01:22:11.284862 | orchestrator | 2026-03-10 01:21:45 | INFO  | Setting property hw_watchdog_action: reset
2026-03-10 01:22:11.284866 | orchestrator | 2026-03-10 01:21:45 | INFO  | Setting property hypervisor_type: qemu
2026-03-10 01:22:11.284869 | orchestrator | 2026-03-10 01:21:46 | INFO  | Setting property os_distro: cirros
2026-03-10 01:22:11.284873 | orchestrator | 2026-03-10 01:21:46 | INFO  | Setting property os_purpose: minimal
2026-03-10 01:22:11.284877 | orchestrator | 2026-03-10 01:21:46 | INFO  | Setting property replace_frequency: never
2026-03-10 01:22:11.284881 | orchestrator | 2026-03-10 01:21:46 | INFO  | Setting property uuid_validity: none
2026-03-10 01:22:11.284884 | orchestrator | 2026-03-10 01:21:47 | INFO  | Setting property provided_until: none
2026-03-10 01:22:11.284888 | orchestrator | 2026-03-10 01:21:47 | INFO  | Setting property image_description: Cirros
2026-03-10 01:22:11.284892 | orchestrator | 2026-03-10 01:21:47 | INFO  | Setting property image_name: Cirros
2026-03-10 01:22:11.284895 | orchestrator | 2026-03-10 01:21:48 | INFO  | Setting property internal_version: 0.6.2
2026-03-10 01:22:11.284899 | orchestrator | 2026-03-10 01:21:48 | INFO  | Setting property image_original_user: cirros
2026-03-10 01:22:11.284921 | orchestrator | 2026-03-10 01:21:48 | INFO  | Setting property os_version: 0.6.2
2026-03-10 01:22:11.284931 | orchestrator | 2026-03-10 01:21:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-10 01:22:11.284936 | orchestrator | 2026-03-10 01:21:49 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-10 01:22:11.284940 | orchestrator | 2026-03-10 01:21:49 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-10 01:22:11.284944 | orchestrator | 2026-03-10 01:21:49 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-10 01:22:11.284947 | orchestrator | 2026-03-10 01:21:49 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-10 01:22:11.284951 | orchestrator | 2026-03-10 01:21:50 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-10 01:22:11.284958 | orchestrator | 2026-03-10 01:21:50 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-10 01:22:11.284962 | orchestrator | 2026-03-10 01:21:50 | INFO  | Importing image Cirros 0.6.3
2026-03-10 01:22:11.284965 | orchestrator | 2026-03-10 01:21:50 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-10 01:22:11.284969 | orchestrator | 2026-03-10 01:21:51 | INFO  | Waiting for image to leave queued state...
2026-03-10 01:22:11.284973 | orchestrator | 2026-03-10 01:21:53 | INFO  | Waiting for import to complete...
2026-03-10 01:22:11.284989 | orchestrator | 2026-03-10 01:22:04 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-10 01:22:11.284993 | orchestrator | 2026-03-10 01:22:04 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-10 01:22:11.285005 | orchestrator | 2026-03-10 01:22:04 | INFO  | Setting internal_version = 0.6.3 2026-03-10 01:22:11.285009 | orchestrator | 2026-03-10 01:22:04 | INFO  | Setting image_original_user = cirros 2026-03-10 01:22:11.285013 | orchestrator | 2026-03-10 01:22:04 | INFO  | Adding tag os:cirros 2026-03-10 01:22:11.285022 | orchestrator | 2026-03-10 01:22:04 | INFO  | Setting property architecture: x86_64 2026-03-10 01:22:11.285025 | orchestrator | 2026-03-10 01:22:05 | INFO  | Setting property hw_disk_bus: scsi 2026-03-10 01:22:11.285029 | orchestrator | 2026-03-10 01:22:05 | INFO  | Setting property hw_rng_model: virtio 2026-03-10 01:22:11.285033 | orchestrator | 2026-03-10 01:22:05 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-10 01:22:11.285037 | orchestrator | 2026-03-10 01:22:06 | INFO  | Setting property hw_watchdog_action: reset 2026-03-10 01:22:11.285040 | orchestrator | 2026-03-10 01:22:06 | INFO  | Setting property hypervisor_type: qemu 2026-03-10 01:22:11.285044 | orchestrator | 2026-03-10 01:22:06 | INFO  | Setting property os_distro: cirros 2026-03-10 01:22:11.285048 | orchestrator | 2026-03-10 01:22:07 | INFO  | Setting property os_purpose: minimal 2026-03-10 01:22:11.285052 | orchestrator | 2026-03-10 01:22:07 | INFO  | Setting property replace_frequency: never 2026-03-10 01:22:11.285056 | orchestrator | 2026-03-10 01:22:07 | INFO  | Setting property uuid_validity: none 2026-03-10 01:22:11.285060 | orchestrator | 2026-03-10 01:22:07 | INFO  | Setting property provided_until: none 2026-03-10 01:22:11.285063 | orchestrator | 2026-03-10 01:22:08 | INFO  | Setting property image_description: Cirros 2026-03-10 01:22:11.285067 | orchestrator | 2026-03-10 01:22:08 | INFO  | 
Setting property image_name: Cirros 2026-03-10 01:22:11.285071 | orchestrator | 2026-03-10 01:22:08 | INFO  | Setting property internal_version: 0.6.3 2026-03-10 01:22:11.285079 | orchestrator | 2026-03-10 01:22:09 | INFO  | Setting property image_original_user: cirros 2026-03-10 01:22:11.285083 | orchestrator | 2026-03-10 01:22:09 | INFO  | Setting property os_version: 0.6.3 2026-03-10 01:22:11.285087 | orchestrator | 2026-03-10 01:22:09 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-10 01:22:11.285091 | orchestrator | 2026-03-10 01:22:09 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-10 01:22:11.285094 | orchestrator | 2026-03-10 01:22:10 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-10 01:22:11.285098 | orchestrator | 2026-03-10 01:22:10 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-10 01:22:11.285102 | orchestrator | 2026-03-10 01:22:10 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-10 01:22:11.653318 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-10 01:22:14.172214 | orchestrator | 2026-03-10 01:22:14 | INFO  | date: 2026-03-09 2026-03-10 01:22:14.172337 | orchestrator | 2026-03-10 01:22:14 | INFO  | image: octavia-amphora-haproxy-2024.2.20260309.qcow2 2026-03-10 01:22:14.172387 | orchestrator | 2026-03-10 01:22:14 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260309.qcow2 2026-03-10 01:22:14.172410 | orchestrator | 2026-03-10 01:22:14 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260309.qcow2.CHECKSUM 2026-03-10 01:22:14.334918 | orchestrator | 2026-03-10 01:22:14 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/work/logs" 2026-03-10 01:22:44.842436 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/work/artifacts" 2026-03-10 01:22:45.118454 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a7702c3a2f3542948a8cdd7d9e63fdb8/work/docs" 2026-03-10 01:22:45.141284 | 2026-03-10 01:22:45.141478 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-10 01:22:46.106452 | orchestrator | changed: .d..t...... ./ 2026-03-10 01:22:46.106775 | orchestrator | changed: All items complete 2026-03-10 01:22:46.106827 | 2026-03-10 01:22:46.819823 | orchestrator | changed: .d..t...... ./ 2026-03-10 01:22:47.543399 | orchestrator | changed: .d..t...... ./ 2026-03-10 01:22:47.565003 | 2026-03-10 01:22:47.565120 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-10 01:22:47.600507 | orchestrator | skipping: Conditional result was False 2026-03-10 01:22:47.603805 | orchestrator | skipping: Conditional result was False 2026-03-10 01:22:47.626528 | 2026-03-10 01:22:47.626686 | PLAY RECAP 2026-03-10 01:22:47.626762 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-10 01:22:47.626800 | 2026-03-10 01:22:47.752953 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-10 01:22:47.755605 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-10 01:22:48.506425 | 2026-03-10 01:22:48.506603 | PLAY [Base post] 2026-03-10 01:22:48.521892 | 2026-03-10 01:22:48.522051 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-10 01:22:49.523260 | orchestrator | changed 2026-03-10 01:22:49.534015 | 2026-03-10 01:22:49.534160 | PLAY RECAP 2026-03-10 01:22:49.534242 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-10 01:22:49.534324 | 2026-03-10 01:22:49.654704 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-10 01:22:49.655765 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-10 01:22:50.450180 | 2026-03-10 01:22:50.450362 | PLAY [Base post-logs] 2026-03-10 01:22:50.461178 | 2026-03-10 01:22:50.461331 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-10 01:22:50.938256 | localhost | changed 2026-03-10 01:22:50.955063 | 2026-03-10 01:22:50.955273 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-10 01:22:50.994340 | localhost | ok 2026-03-10 01:22:51.001290 | 2026-03-10 01:22:51.001466 | TASK [Set zuul-log-path fact] 2026-03-10 01:22:51.018771 | localhost | ok 2026-03-10 01:22:51.035390 | 2026-03-10 01:22:51.035613 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-10 01:22:51.073628 | localhost | ok 2026-03-10 01:22:51.078756 | 2026-03-10 01:22:51.078948 | TASK [upload-logs : Create log directories] 2026-03-10 01:22:51.573715 | localhost | changed 2026-03-10 01:22:51.576650 | 2026-03-10 01:22:51.576759 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-10 01:22:52.079073 | localhost -> localhost | ok: Runtime: 0:00:00.007379 2026-03-10 01:22:52.088174 | 2026-03-10 01:22:52.088363 | TASK [upload-logs : Upload logs to log server] 2026-03-10 01:22:52.663948 | localhost | Output suppressed because no_log was given 2026-03-10 01:22:52.665828 | 2026-03-10 01:22:52.665927 | LOOP [upload-logs : Compress console log and json output] 2026-03-10 01:22:52.713376 | localhost | skipping: Conditional result was False 2026-03-10 01:22:52.721140 | localhost | skipping: Conditional result was False 2026-03-10 01:22:52.733167 | 2026-03-10 01:22:52.733333 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-10 01:22:52.776736 | localhost | skipping: Conditional result was False 2026-03-10 01:22:52.777033 | 2026-03-10 01:22:52.783731 | localhost | skipping: Conditional 
result was False 2026-03-10 01:22:52.793525 | 2026-03-10 01:22:52.793761 | LOOP [upload-logs : Upload console log and json output]